content
stringlengths
86
994k
meta
stringlengths
288
619
How do you write an algebraic expression that models the word phrase "6 times a number increased by 3"? | Socratic How do you write an algebraic expression that models the word phrase "6 times a number increased by 3"? 1 Answer The expression could be written as $6 q + 3$. (see the explanation below) We can choose whatever pronumeral we like - a letter to stand for a number. We most often choose $x$, but we don't have to. Let's choose $q$, just for fun. The expression is simply $6 q + 3$. Well, it would be simple, but the question is not very clear: does it mean "take 6 times the number and increase the result by 3"? That's the expression written above. Or does it mean "take 6 times the number after it has been increased by 3"? If so, the brackets would make that order clear: The expression would be $6 \left(q + 3\right)$. Impact of this question 2432 views around the world
{"url":"https://socratic.org/questions/how-do-you-write-an-algebraic-expression-that-models-the-word-phrase-6-times-a-n#219132","timestamp":"2024-11-07T00:35:25Z","content_type":"text/html","content_length":"33917","record_id":"<urn:uuid:ecae7dc9-eb54-414e-8f18-327b6ed6df95>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00147.warc.gz"}
Difference between Binary Tree and Binary Search Tree What is Binary Tree? Binary Tree is a hierarchical data structure in which each node has zero, one, or at the most, two children. Each node contains a “left” pointer, a “right” pointer, and a data element. The “root” pointer represents the topmost node in the tree. Each node in the data structure is directly connected to arbitrary number of nodes on either side, referred to as children. A null pointer represents the binary tree. There is no particular order to how the nodes are to be organized in the binary tree. Nodes with no children nodes are called leaf nodes, or external nodes. In simple terms, it defines an organized labeling function on the nodes, which in turn assign some random value to each node. Anything which has two children and one parent node is a binary tree. Binary trees are used to store information that forms a hierarchy like the file system on your personal computer. Unlike Arrays, Trees have no upper limit on the number of nodes because they are linked using pointers, like Linked Lists. Main functions of Binary Tree include representing hierarchical data, sorting data lists, providing efficient insert/delete operations, etc. Tree nodes are represented using structures in C. What is Binary Search Tree? A Binary Search Tree is a type of binary tree data structure in which the nodes are arranged in order, hence also called as “ordered binary tree”. It’s a node-based data structure which provides an efficient and fast way of sorting, retrieving, searching data. For each node, the elements in the left subtree must be less than or equal to the key in its parent node (L<P), and the elements in the right subtree must be greater than or equal to the key in its parent node (R>P). There should be no duplicate keys. In simple terms, it’s a special kind of binary tree data structure that efficiently stores and manages items in memory. It allows for fast access of information, insertion and removal of data, plus it can be used to implement lookup tables which allow for searching items by their unique keys, like searching for a person’s phone number by name. The unique keys are sorted in an organized manner, so that lookup and other dynamic operations could be performed using binary search. It supports three main operations: searching of elements, insertion of elements, and deletion of elements. Binary Search Tree allows for fast retrieval of elements stored in the tree as each node key is thoroughly compared with the root node, which discards half of the tree. Difference Between Binary Tree and Binary Search Tree 1. Definition of Binary Tree and Binary Search Tree – Binary Tree is a hierarchical data structure in which a child can have zero, one, or maximum two child nodes; each node contains a left pointer, a right pointer and a data element. There’s no particular order to how the nodes should be organized in the tree. Binary Search Tree, on the other hand, is an ordered binary tree in which there is a relative order to how the nodes should be organized. 2. Structure of Binary Tree and Binary Search Tree– The topmost node in the tree represents the root pointer in a binary tree, and the left and the right pointers represent the smaller trees on either side. It’s a specialized form of tree which represents data in a tree structure. Binary search tree, on the other hand, is a type of binary tree in which all the nodes in the left subtree are less than or equal to the value of the root node and that of the right subtree are greater than or equal to the value of the root node. 3. Operation of Binary Tree and Binary Search Tree– Binary tree can be anything that has two children and one parent. Common operations that can be performed on a binary tree are insertion, deletion, and traversal. Binary search trees are more of sorted binary trees that allows for fast and efficient lookup, insertion, and deletion of items. Unlike binary trees, binary search trees keep their keys sorted, so lookup usually implements binary search for operations. 4. Types of Binary Tree and Binary Search Tree– There are different types of binary trees, the common being the “Full Binary Tree”, “Complete Binary Tree”, “Perfect Binary Tree”, and “Extended Binary Tree”. Some common types of binary search trees include T-trees, AVL trees, Splay trees, Tango trees, Red-Black trees etc. Binary Tree vs. Binary Search Tree: Comparison Chart Binary Tree Binary Search Tree Binary Tree is a specialized form of tree which represents hierarchical data Binary Search Tree is a type of binary tree which keeps the keys in a sorted order for fast lookup. in a tree structure. Each node must have at the most two child nodes with each node being The value of the nodes in the left subtree are less than or equal to the value of the root node, and the nodes to the connected from exactly one other node by a directed edge. right subtree have values greater than or equal to the value of the root node. There is no relative order to how the nodes should be organized. It follows a definitive order to how the nodes should be organized in a tree. It’s basically a hierarchical data structure that is a collection of elements It’s a variant of the binary tree in which the nodes are arranged in a relative order. called nodes. It is used for fast and efficient lookup of data and information in a tree It is mainly used for insertion, deletion, and searching of elements. Summary of Binary Tree and Binary Search Tree While both simulate a hierarchical tree structure representing a collection of nodes with each node representing a value, they are quite different from each other in terms of how they can be implemented and utilized. A Binary Tree follows one simple rule that each parent node has no more than two child nodes, whereas a Binary Search Tree is just a variant of the binary tree which follows a relative order to how the nodes should be organized in a tree. Latest posts by Sagar Khillar (see all) Search DifferenceBetween.net : Email This Post : If you like this article or our site. Please spread the word. Share it with your friends/family. 5 Comments 1. Its very helpful to me for my semester examination Thank you 2. This is very helpful article to go through my semester exam..Thank You very much. 3. Concise form of explanation Very nice… Leave a Response References : [0]Skiena, Steven S. The Algorithm Design Manual. Berlin: Springer, 2009. Print [1]Makinson, David. Sets, Logic and Maths for Computing. Berlin, Springer, 2008. Print [2]Aho, Alfred V., Jeffrey D. Ullman, and John E. Hopcroft. Data Structures and Algorithms. London: Pearson, 1983. Print [3]"Image Credit: http://compsci2014.wikispaces.com/5.1.17+Sketch+binary+trees" [4]"Image Credit: https://stackoverflow.com/questions/28796695/why-does-binary-search-tree-tend-to-become-unbalanced-to-the-right" Articles on DifferenceBetween.net are general information, and are not intended to substitute for professional advice. The information is "AS IS", "WITH ALL FAULTS". User assumes all risk of use, damage, or injury. You agree that we have no liability for any damages.
{"url":"https://www.differencebetween.net/technology/difference-between-binary-tree-and-binary-search-tree/","timestamp":"2024-11-05T19:04:47Z","content_type":"application/xhtml+xml","content_length":"98002","record_id":"<urn:uuid:5c3461dc-76ad-4642-9dc6-1f45c3dcf37c>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00329.warc.gz"}
MPE Sustainable Planet Curriculum Workshop October 20 - 22, 2011 DIMACS Center, CoRE Building, Rutgers University, Piscataway, NJ Midge Cozzens, DIMACS, midgec at dimacs.rutgers.edu Eugene Fiorini, DIMACS, gfiorini at dimacs.rutgers.edu Fred Roberts, DIMACS, froberts at dimacs.rutgers.edu Mary Lou Zeeman, Bowdoin College and the Mathematics and Climate Research Network (MCRN) This project is jointly sponsored by: the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS), Mathematics and Climate Research Network (MCRN), Bowdoin College and the US National Science Foundation. Call for Participation: This workshop is by invitation only. Contacting the Center Document last modified on June 23, 2011.
{"url":"http://dimacs.rutgers.edu/archive/Workshops/Planet/participation.html","timestamp":"2024-11-02T12:28:33Z","content_type":"text/html","content_length":"1970","record_id":"<urn:uuid:46339cc9-5a67-44ba-9431-5638c73d4f1b>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00636.warc.gz"}
Two Dice Find all the numbers that can be made by adding the dots on two dice. Printable NRICH Roadshow resource. Here are two dice. If you add up the dots on the top you'll get $7$. Find two dice to roll yourself. Add the numbers that are on the top. What other totals could you get if you roll the dice again? Notes for adults You will need two dice to play this game. The children can count the total number of spots on the dice or add them together using number facts they already know. Record the results and explore the different totals that you can get. Help them to find all the possible combinations. Getting Started If one dice shows $6$, what could the other dice be showing? What totals would they give? What could the other dice be if one is a $5$? A $4$? How do you know you've found all the totals? Student Solutions It seems that quite a few of you enjoyed the dice rolling needed here. Joe from Clutton Primary had this work of his commented upon by an adult. Joe worked systematically to find all the possible combinations for throwing two dice. He identified the lowest possible score of 2 and the highest score of 12. At first he thought that 4 was the lowest score and was then able to correct himself. Robert from St.Michael's Primary West Midlands sent in this very thorough solution; There are eleven possible solutions to 'Two Dice'. Let's say Dice One is 'A' and Dice Two is 'B'. They are as follows : $2$ (A=$1$ and B=$1$) $3$ (A=$1$ and B=$2$; A=$2$ and B=$1$) $4$ (A=$1$ and B=$3$; A=$2$ and B=$2$; A=$3$ and B=$1$) $5$ (A=$1$ and B=$4$; A=$2$ and B=$3$; A=$3$ and B=$2$; A=$4$ and B=$1$) $6$ (A=$1$ and B=$5$; A=$2$ and B=$4$; A=$3$ and B=$3$; A=$4$ and B=$2$; A=$5$ and B=$1$) $7$ (A=$1$ and B=$6$; A=$2$ and B=$5$; A=$3$ and B=$4$; A=$4$ and B=$3$; A=$5$ and B=$2$; A=$6$ and B=$1$) $8$ (A=$2$ and B=$6$; A=$3$ and B=$5$; A=$4$ and B=$4$; A=$5$ and B=$3$; A=$6$ and B=$2$) $9$ (A=$3$ and B=$6$; A=$4$ and B=$5$; A=$5$ and B=$4$; A=$6$ and B=$3$) $10$ (A=$4$ and B=$6$; A=$5$ and B=$5$; A=$6$ and B=$4$) $11$ (A=$5$ and B=$6$; A=$6$ and B=$5$) $12$ (A=$6$ and B=$6$) Two $1-6$ dice will not add up to any other number. This solution shows the amount of numbers possible to make. However there are 36 possible Pupils from David Hoy Elementary School worked on the challenge and this was the result; Step 1: Explore In groups of three we took turns rolling the dice. Then we recorded our answers, as this was going to help us remember what numbers we found. In some groups everyone wrote down the number sentences, in other groups, each person wrote one of the addends, and the third person wrote the solution. (Ex: 4+3=7) Step 2: Making a class list of all the numbers we found and how We read the numbers we found off of our sheets checking to see if we had rolled the same as other groups, and then looking for other numbers we missed. Step 3: To make sure we had found all the numbers, the teacher asked what the largest number was and why "12, it can't be bigger than 12" "It can't be a very big number cause there is only like 4 maybe 5 sides on the dice, so not many numbers." "Six was the biggest number on the dice." "Double six is 12, so that is the biggest number." "Then the smallest has to be 2, cause double 1 is 2, and 1 is the smallest number." Teacher: Can we get all the numbers in between? "Look at the board, and put the solutions on a number line." Which is what we did, and we discovered that there were multiple ways to get some of the numbers, but only one way to get 2 and 6.... Next I think we will try and figure out how many ways we can get the numbers 3-11 using two dice. Their teacher said: What a nice problem! My class loved it, it was a great way to practise counting, counting on, and writing number sentences. Well done, thank you so much for all the contributions. Submissions that include some reflections of what happened are always good to share. Teachers' Resources Why do this problem? This activity provides a valuable experience for younger pupils to explore some simple additions while finding all possibilities. What children need to know to play this game The children need to be able to roll two dice and identify their score. Possible approach Using a dice with dots on, encourage discussion as to what numbers are represented by the faces of the dice before introducing the challenge itself. You could support the children to collect their totals on the board. Ask them how they should be arranged and see if they can suggest a systematic way of recording their results. For example, they might start with all the totals that use a $1$. In this way, you can ask the class to talk about the patterns they notice and this will help to reveal any combinations that are missing. Key questions These questions have been phrased in ways that will help you to identify the children's prior knowledge about both the number concepts involved and the strategies and mathematical thinking needed to solve the problem. Can you make a bigger/smaller total? What is the highest total you could make? What is the lowest total you could make? If one dice shows $6$, what could the other dice be showing? How will you know when you've found all the totals? Possible extension You could make use of more dice and/or dice with different numbers of faces. Alternatively, consider finding the difference between the two numbers or the product of the two numbers. Possible support Children who struggle with addition may count the dots to help them but encourage them to articulate the number sentence once they have done so. This will help them to build the visualisations of the numbers as dotty dice patterns which will support their learning of number bonds.
{"url":"https://nrich.maths.org/problems/two-dice","timestamp":"2024-11-12T00:47:30Z","content_type":"text/html","content_length":"46533","record_id":"<urn:uuid:264eab2f-b84a-4574-9d0b-03fa970e34d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00157.warc.gz"}
Lesson 9 - Statistical Analysis Methods Lesson 9 - Statistical Analysis Methods Lesson 9 Objectives Upon completion of this lesson, you should be able to: • Use plots, tables, and summary statistics to describe variables and relationships between variables • Identify which modeling strategy to use based on the type of data (continuous, categorical, time-to-event) • Interpret results of statistical analyses • Differentiate between odds ratios, risk ratios, and hazard ratios Epidemiologic data can be analyzed using a variety of statistical methods. Here we outline the fundamentals according to the type of outcome measure. We can generally think of outcome data as one of three types: 1) continuous, 2) categorical, and 3) time-to-event. While it is true that time-to-event is continuous, we do not always observe the true time for each person, so special consideration needs to be given in that scenario. Once the type of outcome data is known, there are standard techniques one can use to provide descriptive statistics, look at bivariable associations, and use modeling to describe the association between multiple covariates and the outcome. Example: a subset of data from the Framingham Heart Study Our motivating example will be based on the SAS-provided dataset “Heart” which includes a small subset of data from participants in the Framingham Heart Study. This dataset contains over 5000 patients from the cohort study and provides data on their baseline age, sex, weight, smoking status, cholesterol, blood pressure, and coronary heart disease (CHD) development. Patients were contacted every 2 years for over 30 years. 9.1 - Continuous outcome 9.1 - Continuous outcome From our example, we may be interested in the relationship of age with cholesterol, and want to consider a possible confounder (or effect modifier) of sex. • The outcome is cholesterol and is a continuous value. • The predictors/covariates to be considered are age and sex. Age can be either continuous, or put into categories, and sex is a categorical variable. For the continuous outcome of cholesterol, first, we can look at the distribution of the data via a histogram and by calculating descriptive statistics: Analysis Variable: Cholesterol Std Lower Upper 25th 75th N Mean Dev 95% CL 95% CL Minimum Pctl Median Pctl Maximum for Mean for Mean 5057 227.42 44.94 226.18 228.66 96.00 196.00 223.00 255.00 268.00 Here, we see that cholesterol appears normally distributed, with a mean of 227.4 and confidence interval around the mean of (226.18, 228.66). This CI is very narrow due to the large sample size. Since we have a continuous outcome, we will likely plan to use linear regression. We can do a test for normality, but with such a large sample size, even if there appears to be a deviation from normality, it is still reasonable to use linear regression. With smaller datasets, or highly skewed data, a transformation may be necessary. The Kolmogorov-SMirnov test for normality for cholesterol does result in a significant p-value (p<0.01), but since we have such a large sample size, we will still proceed with linear regression. Bivariable Associations We hypothesize that age is related to cholesterol, with cholesterol increasing with increasing age. Since age is continuous, we can use it as a continuous predictor, and we may want to categorize it to help with visualization or interpretability. Treating age as continuous would lead us to look at a scatter plot between the two continuous variables, as well as estimate a correlation coefficient as a measure of association. We see that cholesterol does appear to increase as age increases, and this best fit line suggests a positive slope. The correlation coefficient between the two variables is 0.27. A correlation coefficient ranges from -1 to 1 with values closest to 0 indicating no relationship. The closer to 1 (or -1) the correlation coefficient is, the stronger the correlation. A correlation coefficient of 1 (or -1) would indicate perfect correlation - as demonstrated by all points falling along a single line. Values closer to 0 indicate no relationship and the graph would just appear to be a random cloud of points. The positive or negative sign of the correlation coefficient indicates if it is a positive or negative correlation. Positive correlation means that as one variable increases, so does the other, and negative means that as one variable increases, the other decreases. We could also group age into categories and look at the relationship. Here, we would calculate means per group, and could visualize the relationship with boxplots. agegrp Frequency Percent Cumulative Cumulative Frequency Percent <40 1877 36.03% 1877 36.03% [40-50] 1740 33.42% 3618 69.46% >=50 1591 30.54% 52.09 100.00% Analysis Variable: Cholesterol Std Lower Upper 25th 75th agegrp N Mean Dev 95% CL 95% CL Minimum Pctl Median Pctl Maximum for Mean for Mean <40 1819 213.18 41.97 211.25 215.11 115.00 183.00 209.00 235.00 534.00 [40-50] 1690 229.63 42.92 227.59 231.68 117.00 200.00 226.00 253.00 568.00 >=50 1548 241.73 45.49 239.46 244.00 96.00 210.00 238.00 270.00 425.00 We see that about a third of patients are in each age group (<40, 40 - 50, and 50 and older), and that for each increasing age group, the mean cholesterol is higher. For the boxplot, the box indicates the 25th, 50th (median), and 75th percentiles as the bottom, middle, and top of the box, respectively. The marker inside the box shows the mean, which is often close to the median for large sample sizes with normally distributed data. The whiskers extend out relative to the interquartile range, and data points that fall out of that limit are shown with dots. Since we are also interested in sex, we should summarize that vairable as well. Females have higher cholesterol on average than males, but only by about 2 points: Analysis Variable: Cholesterol Std Lower Upper 25th 75th sex N Mean Dev 95% CL 95% CL Minimum Pctl Median Pctl Maximum for Mean for Mean Female 2774 228.54 46.92 226.79 230.29 117.00 196.00 224.00 257.00 493.00 Male 2283 226.05 42.37 224.31 227.79 96.00 198.00 223.00 250.00 568.00 Modeling (Multivariable Associations) In order to look at the relationship of multiple variables with our outcome, we need to move to modeling. With a continuous outcome, we can use linear regression. First we want to see if the differences in cholesterol by age group are significant. Our model can then be fit with just age group as a covariate and we see: Analysis of Maximum Likelihood of Parameter Estimates Parameter DF Estimate Standard Wald 95% Confidence Wald Pr > ChiSq Error Limits Chi-Square Intercept 1 213.1781 1.0170 211.1848 215.1715 43934.9 <.0001 agegrp >=50 1 28.5525 1.4999 25.6127 31.4923 362.36 <.0001 agegrp [40-50] 1 16.4550 1.4655 13.5827 19.3273 126.07 <.0001 agegrp <40 0 0.0000 0.0000 0.0000 0.0000 The estimate for the difference in cholesterol between the oldest and youngest age group is 28.6 (which we can confirm from our earlier descriptive table), the CI for this estimate is (25.6 - 31.5), and the p-value is <0.0001, all clearly providing evidence that there is a significant difference in cholesterol between the oldest and youngest age groups. A similar conclusion is seen with significantly higher cholesterol in the middle age group compared to the younger - on average about 16.5 points higher. Next, we may want to see if this relationship still holds after controlling for sex. The model including both covariates in the model shows this: Analysis of Maximum Likelihood of Parameter Estimates Parameter DF Estimate Standard Wald 95% Confidence Wald Pr > ChiSq Error Limits Chi-Square Intercept 1 211.7290 1.2206 209.3367 214.1213 30090.1 <.0001 agegrp >=50 1 28.5721 1.4993 25.6336 31.5107 363.18 <.0001 agegrp [40-50] 1 16.4595 1.4648 13.5885 19.3305 126.26 <.0001 agegrp <40 0 0.0000 0.0000 0.0000 0.0000 Sex Female 1 2.6280 1.2252 0.2266 5.0293 4.60 0.0320 Sex Male 0 0.0000 0.0000 0.0000 0.0000 The estimates for differences by age group are still about the same: 28 points higher for oldest vs youngest age group, and 16 points higher for the middle vs youngest group, even after controlling for sex. Thus, it does not appear that sex is a confounder. This model is also consistent with the simple descriptives of cholesterol by sex that showed on average females have slightly higher cholesterol (about 2.5 points). Finally, we may want to investigate if sex is an effect modifier, and thus we also include the interaction term of agegrp*sex. The p-value for this is significant, and the model estimates show that these are the estimated means per group: female male <40 206.1 221.9 [40-50] 230.2 228.9 >=50 253.4 227.8 We can see that as age group increases, so does cholesterol, but much more dramatically in females. Thus age group is an effect modifier. Males have an average cholesterol around 220-230, and this does not seem to change with age. Females, on the other hand, have a greater change in cholesterol with increasing age. We can see this better by graphing the means by group and seeing that the mean cholesterol for males is mainly flat line, but the line connecting the means for females has a slope. 9.2 - Categorical outcome 9.2 - Categorical outcome From our example, we may be interested in the relationship of BMI with high blood pressure, and want to consider a possible confounder (or effect modifier) of sex. The outcome is high blood pressure and is a dichotomous value (either present or not). The predictors/covariates to be considered are BMI and sex. BMI can be either continuous or put into categories, and sex is a categorical variable. First, we want to report the percentage of patients who have high blood pressure with a 95% confidence interval. We find that 43.5% of patients report high blood pressure. The exact CI for this estimate is (42.2% - 44.8%), again very narrow due to the large sample size. high_BP Frequency Percent Cumulative Cumulative Frequency Percent low/normal 2942 56.48% 2942 56.48% high 2267 43.52% 5209 100.00% Bivariable Associations Next we want to look at the relationship with BMI, and can consider BMI as both continuous and categorical variables Table of BMIGrp by high_BP BMIgrp high_BP low/normal high Total [18/5-25] normal 1727 729 2456 (70.32%) (29.68%) [25-30] overwght 953 1017 1970 (48.38%) (51.62%) >= obese 188 506 694 (27.09%) (72.91%) Total 2868 2252 5120 Frequency Missing = 89 We see that as BMI level increases, so does the rate of high BP (30%, 52%, and 73% for increasing levels of BMI). We can use a chi-squared test here to test the association between the two variables. It is highly significant, and not surprisingly so, due to the large sample size. Considerations specifically related to Non-matched Case-Control Studies: 1. Chi-squared tests can be used for the bivariable association of exposure and outcome. If any cell counts are less than 5, Fisher’s Exact tests should be used instead. 2. If we want to evaluate potential effect modifiers using these types of bivariable association tables, we can use the Mantel-Haenszel statistic, which essentially breaks the exposure * outcome table up by potential effect modifier to evaluate if there are different effects for different strata. We can also look at a boxplot or histogram for the continuous version of BMI and see that on average, patients with high BP tend to have higher BMI compared to those without high BP. Analysis Variable: bmi Std Lower Upper 25th 75th high_BP N Mean Dev 95% CL 95% CL Minimum Pctl Median Pctl Maximum for Mean for Mean low/normal 2936 24.37 3.61 24.24 24.50 14.12 21.87 24.01 26.49 51.96 high 2263 27.15 4.48 26.97 27.34 15.77 24.05 26.73 29.52 56.68 We can use a two group t-test to compare the means by group, but it is often more streamlined to consider the modeling technique you plan to use, and use that for both bivariable and multivariable associations. The model with just a single covariate in the model will provide an unadjusted result, and the model with multiple covariates will provide an adjusted result. Modeling (Multivariable Associations) For a dichotomous outcome we may want to estimate odds ratios or risk ratios, and thus will use logistic or log-binomial regression, respectively. Logistic regression to estimate Odds Ratio: Using the table which shows the raw counts in each BMI group who have high BP, we can calculate the OR of high BP for the overweight vs normal BMI group as (1017*1727)/(729*953) = 2.52, and similarly for the obese vs normal groups as (506*1727)/(729*188) = 6.38. From the logisitic regression model, we get these same estimates, along with 95% CIs: Label Estimate Standard Confidence Limits (OR overwght vs. normal) 2.5281 0.1596 2.2339 2.8610 (OR obese vs. normal) 6.3761 0.6131 5.2809 7.6985 Considerations specifically related to Case-Control Studies: Remember that for non-matched case-control studies, OR must be calculated since the distribution of exposure is not necessarily representative of the population. The sampling fractions cancel out in the OR calculation, but not in the RR. These logistic regression models can be considered unconditional, which is appropriate for non-matched case control studies, but not MATCHed case control studies. For matched case control studies conditional logistic regression modeling should be used, and the OR is calculated based on concordant and discordant pairs. Log-binomial regression to estimate Risk Ratio: Using the table which shows the raw counts in each BMI group who have high BP, we can calculate the RR of high BP for the overweight vs normal BMI group as (1017/1970)/(729/2456) = 1.74, and similarly for the obese vs normal groups as (506/694)/1017/2456)) = 1.41. From the log-binomial regression model, we get these same estimates, along with 95% CIs: (note that these are the unadusted RR, which can also be calculated just from the table in the previous section). Note that if the log-binomial model does not converge, modified poisson regression modeling can be used. Label Estimate Standard Confidence Limits (RR overwght vs. normal) 1.4536 0.0267 1.3794 1.5317 (RR obese vs. normal) 2.5958 0.0636 2.2914 2.9406 As stated earlier, we often want the RR, so we’ll proceed with those estimates. But notice how the RR are less extreme than the OR, which is often the case. And if readers don’t know the distinction between OR and RR, and assume the OR can be interpreted as the RR, they will incorrectly overestimate the difference in risk between groups. If we want adjusted RR, we can simply add the other covariates to the model. In this case we want to see if sex is a possible confounder or effect modifier. Adding sex to the model, does not meaningfully change the RR based on BMI (the estimates are essentially the same), thus sex is not a confounder. • RR overwght v normal = 1.44 • RR for obese v normal = 2.59 The model also shows that sex is a significant predictor of blood pressure. (In the unadjusted setting, we see that the rates of high BP in males is 46% compared to 41% in females - not a huge clinical difference). The adjusted RR for female v males is 1.06 (95% CI: 1.01 - 1.11). Suggesting a small increased risk of high BP for females compared to males. To evaluate sex as a potential effect modifier, we can include an interaction term in the model. Doing so shows no statistical evidence of an interaction, thus we can assume the relationship between BMI and high blood is similar for both males and females. If the interaction had been significant, the next step would be to provide stratified analyses, where we estimate RR estimates for BMI with high blood separately for females and males. 9.3 - Time-to-event outcome 9.3 - Time-to-event outcome Examples of time-to-event data are: • Time to death • Time to development of a disease • Time to first hospitalization • And many others One may think that time-to-event data is simply continuous, but since we do not observe the true time for each person in the dataset, this is not the case. The people who do not experience the event still contribute valuable information, and we refer to these patients as “censored”. We use the time they contribute until they are censored, which is the time they stop being followed because the study has ended, they are lost to follow-up, or they withdraw from the study. For our example, we are interested in the time to development of coronary heart disease (CHD). No patients had CHD upon study entry, and patients were surveyed every 2 years to see if they had developed CHD. Each patient’s “time-to-CHD” will fall into one of these categories: 1. They develop CHD within the 30-year study period Time = years until they develop CHD Status = event 2. They do not develop CHD within the 30-year study period, and they stay in the study until the end Time = 30 years Status = censored 3. They do not develop CHD within the 30-year study period, and they leave the study before the 30-year study period is finished (due to death, moving, lost contact, voluntarily withdraw, etc.) Time = time on study Status = censored The best way to describe time-to-event data is by the Kaplan-Meier method. This uses information from all patients, and differentiates between patients who did and did not experience the event. A Kaplan Meier (KM) plot is how we visualize time-to-event data and starts with all patients being event-free at time 0. The KM method uses the number of patients still at risk over time, and patients drop out once they experience the event or are censored. A Kaplan Meier plot and a Cumulative Incidence plot are inverses of each other, so you can choose which best fits your data. Often for “Overall Survival” we use KM plots, which start at 100%, and decrease over time as patients either die or are censored. This can really be considered as plotting the percentage of patients still alive. For our example, it makes more sense to look at a cumulative incidence plot, which starts at 0% and shows how the incidence of CHD increases over time. (A KM plot would plot the percent of people who are CHD-free, and this would decrease over time.) This plot shows that over time CHD is increasing, and we can get estimates of rates of CHD at different time points using the KM estimate. When comparing time-to-event data between groups, we can use the KM method again, as well as perform a log-rank test. For our example, suppose we want to compare time to CHD by BP status. This plot shows that those with high BP at study entry (blue line) have higher rates of CHD than those with low or normal BP (red line). The KM estimates of CHD at 10 years are 12.7% for the high BP group and 4.7% for the low/normal group. At 20 years, these estimates are 26.1% and 12.0%. The log-rank test is essentially a comparison of lines, not specifically comparing estimates at any single point, and is highly significant here (p<0.0001). Modeling (Multivariable Associations) We can use Cox Proportional Hazards modeling to estimate the hazard ratio. This model uses the hazard function which is the probability that if a person survives to time t, they will experience the event in the next instant. Just from eyeballing the previous plot, it appears that the risk of CHD is about twice as high for those with high BP compared to those with low/normal. Actually fitting a Cox model with high BP as a single covariate shows that the estimated hazard ratio is 1.87 (95% CI: 1.69 - 2.08), which fits with what we see in the plot. The Cox models can also include multiple covariates to test for confounding and interaction terms to evaluate effect modification, similar to those in previous sections. With additional terms in the model, we can estimate adjusted hazard ratios. 9.4 - Summary 9.4 - Summary When planning analyses for a study it is important to be clear about what type of data you’ll have. Once you know if the outcome measure is continuous, categorical, or time-to-event, you can choose the appropriate methods. Understanding your data is very important, so do not skip the step of looking at descriptive statistics first, including looking at distributions and graphs whenever possible. Next, you can start to look at associations between variables (bivariable) to get a sense of how variables relate to one another. This step can and should also use graphs and tables to visualize data whenever helpful. Once these relationships are understood, modeling techniques can be used. Models allow for both unadjusted and adjusted estimates to be calculated and can include more than one covariate. Modeling can be used to evaluate potential confounding along with effect modification.
{"url":"https://online.stat.psu.edu/stat507/book/export/html/764","timestamp":"2024-11-12T08:28:42Z","content_type":"text/html","content_length":"53282","record_id":"<urn:uuid:079d76d1-41ac-46f0-bd84-5b7927093faf>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00549.warc.gz"}
Maths Std 6 Ch 9 MCQ Que Ans Maths Science Corner NCERT Maths (Ganit) Standard 6 (Class 6) (Dhoran 6) Chapter 09 Mahiti Nu Niyaman (Data Handling) MCQ Question Answer Pdf and Video On Maths Science Corner you can now get new NCERT 2020 Gujarati Medium Textbook Standard 6 Maths Chapter 09 Mahiti Nu Niyaman (Data Handling) Swadhyayn Pothi (Work Book) Solution in Video format for your easy reference. On Maths Science Corner you will get all the printable study material of Maths and Science Including answers of prayatn karo, Swadhyay, Chapter Notes, Unit tests, Online Quiz etc.. This material is very helpful for preparing Competitive exam like Tet 1, Tet 2, Htat, tat for secondary and Higher secondary, GPSC etc.. Highlight of the chapter 9.1 Introduction 9.2 Recording Data 9.3 Organization of data 9.3.1 Frequency distribution table with the help of tally marks 9.4 Pictograph 9.5 Interpretation of pictograph 9.6 Drawing a Pictograph 9.7 A bar graph 9.7.1 Interpretation of a bar graph 9.7.2 Drawing a bar graph 9.8 Summary You will be able to learn above topics in Chapter 09 of NCERT Maths Standard 6 (Class 6) Textbook chapter. Earlier Maths Science Corner had given Completely solved NCERT Maths Standard 6 (Class 6) Textbook Chapter 09 Mahiti Nu Niyaman (Data Handling) in the PDF form which you can get from following : Mathematics Standard 6 (Class 6) Textbook Chapter 9 in PDF Format Today Maths Science Corner is giving you the Video Lecture of Chapter 09 Swadhyayn Pothi (Work Book) Solution in the form of Video Lectures of NCERT Maths Standard 6 (Class 6) in Video format for your easy reference. Mathematics Standard 6 Chapter 9 (Ganit Dhoran 6 Prakaran 9) Mahiti Nu Niyaman (Data Handling) MCQ Question Answer Pdf Mathematics Standard 6 Chapter 9 (Ganit Dhoran 6 Prakaran 9) Mahiti Nu Niyaman (Data Handling) MCQ Question Answer Pdf In this Video you will be able to learn How to arrange numbers in ascending or descending order and learn Indian and International Number system. You can get Standard 6 Material from here. You can get Standard 7 Material from here. You can get Standard 8 Material from here. You Can get Standard 9 Material from here. You can get Standard 10 Material from here. No comments:
{"url":"https://www.mathssciencecorner.com/2022/11/maths-std-6-ch-9-mcq-que-ans.html","timestamp":"2024-11-12T16:09:42Z","content_type":"application/xhtml+xml","content_length":"145294","record_id":"<urn:uuid:2fa45163-6225-49ac-b672-43a1b65357d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00569.warc.gz"}
Circuit Breaker Halt Indicator for ThinkorSwim - useThinkScript Community Last edited by a moderator: Join useThinkScript to post your question to a community of 21,000+ developers and traders. Alright I created the Indicator. It took a long time so feel free to send a if you find it useful. Halt Indicator V2 EDIT:Use this one, the code has been updated to fix a small issue on stocks under .75 https://tos.mx/AQBubMo Source code: #Created By Brent Vogl (Wannaspeed, Brent V) #Do not Distribute without Creator's Permission #Do you love my indicator? Help me out with a paypal donation by copy/Paste the link below! #Paypal Donate: https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=4CRL8BNAESUHW&source=url #Date created 11/19/19 #Version 2.1 #Shows Estimated Halt High or Halt Low ################ Inputs (choose what you want to display) Input showPriorclose = yes; Input showFiveMinutePrice = yes; Input showHaltHighPlot = yes; Input showHaltLowPlot = yes; #input lowLimit = 0.75; #input highLimit = 3; ################ Prior Close Price def aggregationPeriod = AggregationPeriod.DAY; def displace = -1; def Priorclose =close(period =aggregationPeriod)[-displace]; ################ Definitions def isopen=if secondsFromTime(0945)>=0 and secondstillTime(1535)>=0 then 1 else 0; #def beforeStart = GetTime() < RegularTradingStart(getYYYYMMDD()); #def isopen=if secondsFromTime(0930)>=0 and secondstillTime(0945)>=0 and secondsFromTime(1535) and secondsTillTime(1600) then 1 else 0; def FiveMinPrice = close(period = AggregationPeriod.min)[5]; Def HaltHigh = (if isopen then FiveMinPrice * 1.10 else FiveMinPrice *1.20); Def HaltLow = if isopen then FiveMinPrice / 1.10 else FiveMinPrice /1.20; Def HaltHighBetween = (if isopen and priorclose between .75 and 3 then FiveMinPrice * 1.20 else FiveMinPrice *1.40); Def HaltLowBetween = if isopen and priorclose between .75 and 3 then FiveMinPrice / 1.20 else FiveMinPrice /1.40; Def HaltHighUnder = if isopen and priorclose is less than 0.75 then FiveMinPrice * 1.75 else FiveMinPrice *2.5; Def HaltLowUnder = if isopen and priorclose <= 0.75 then FiveMinPrice / 1.75 else FiveMinPrice /2.5; Def HaltHigh15 = if isopen and priorclose <= 0.75 then fiveMinPrice + 0.15 else fiveMinPrice + 0.30; Def HaltLow15 = if isopen and priorclose <= 0.75 then fiveMinPrice - 0.15 else fiveMinPrice - 0.30; ################ Labels & Plots Addlabel (showPriorclose, + Priorclose, color.dark_GRAY); Addlabel (ShowFiveMinutePrice, + FiveMinPrice, color.DaRK_GRAY); Addlabel (Priorclose >=3, HaltHigh, color.light_GREEN);#For stocks over $3 #Addlabel (yes, HaltHigh, color.light_GREEN); Addlabel (Priorclose between 0.75 and 3, HaltHighBetween, color.light_GREEN);#For stocks between .75-3 Addlabel (Priorclose between 0.75 and 3, HaltLowBetween, color.pink);#For stocks between .75-3 Addlabel (Priorclose >=3, HaltLow, color.pINK); #Fors stocks over $3 #Addlabel (Priorclose <=0.75, HaltHighUnder, color.light_Green); #Fors stocks under .75 Addlabel (Priorclose <=0.75, if HaltHighUnder > HaltHigh15 then HaltHigh15 else haltHighUnder, color.light_Green); #Stocks under .75 #Addlabel (Priorclose <=0.75, HaltLowUnder, color.pINK); #Fors stocks under .75 Addlabel (Priorclose <=0.75, if HaltLowUnder < HaltLow15 then HaltLow15 else haltLowUnder, color.pink);#Stocks Under .75 ###check #Addlabel (yes, HaltLow, color.pINK); plot HHighOver3 = if ShowHalthighPlot > 0 and Priorclose >=3 then Halthigh else Double.NaN; plot HLowOver3 = if ShowHaltLowPlot > 0 and Priorclose >=3 then HaltLow else Double.NaN; plot HhighBEtween = if ShowhaltHighPlot > 0 and between(PriorClose, .75, 3) then HaltHighBetween else Double.NaN; #plot HHighBetween = if ShowhaltHigh > 0 and priorclose >= lowLimit and priorclose <= highLimit then HaltHighBetween else Double.NaN; plot HLowBEtween = if ShowhaltLowPlot > 0 and between(PriorClose, .75, 3) then HaltLowBetween else Double.NaN; #plot HHighUnder = if ShowhaltHighPlot > 0 and Priorclose <= .75 then HaltHighUnder else Double.NaN; plot HHighUnder = if ShowhaltHighPlot > 0 and Priorclose <= 0.75 then Min(HaltHighUnder, HaltHigh15) else Double.NaN; #plot HLowUnder = if ShowhaltLowPlot > 0 and Priorclose <= 0.75 then HaltLowUnder else Double.NaN; plot HLowUnder = if ShowhaltLowPlot > 0 and Priorclose <= 0.75 then Max(HaltLowUnder, HaltLow15) else Double.NaN;###Check It only works on a 1 minute chart, because of limitations with the aggregation period. I don't know if there's a different way to find out the price for 5 minutes ago on other chart time periods. but if anyone knows how I can try to implement it. By default it shows yesterday's close price, and the close price from 5 mins ago. These can be turned off. It has an upper and lower halt indicator in light green and pink, It also shows an upper/ lower Plot by default, which can also be turned off individually. Just because the price touches or overlaps the halt price does not mean the stock will halt, it has to stay at that level for 15 seconds. Also the stock may halt if there's huge volatility between the bid and the ask. This Indicator only compares the close price 5 mins ago to the current price, so it's only a guideline (though it should be pretty accurate). This indicator will not work for tier 1 stocks, though I don't think this is too necessary because they rarely have Volatility Halts. I may try to implement a daily average volume parameter that factors the lower 5/10% for tier 1 stocks at a later date, but its low priority for me because I rarely trade them. This Indicator does take into account the first 15 minutes of the day and the last 25 minutes, and increases the ranges respectively. It also factors in the price (over $3, Between .75-3 and under .75) and will do the lesser of either 15/30 cents or 75%/150% for stocks under .75. It does continue to show the Halt prices, and plots for Premarket and Post market although there are no halts during this time. I will try to implement a toggle to disable for Aftermarket hours. I have no idea the characteristics of the Halt indicator during the first 5 minutes of market open. I'm not sure how the SEC handles it either... (is it based on Open price or premarket prices? This indicator will be based on premarket data. Though Im sure it could be adapted if it needs different data during this period. Anyway, if you have any questions, tips, or notice any bugs or odd behavior feel free to let me know. Last edited: Last edited by a moderator: Well it seems to work well, its accurately detected 2 halts so far this morning. Only thing I noticed is the bars wont be accurate for 5 minutes after a halt, since TOS doesn't count the time halted as 1 min bars. I'm not sure how to get around that. Only thing I could think of is if volume is 0 for 3 minutes then start counting from last price until volume resumes and 5 minutes passes. Unfortunately my thinkscript coding skills at the moment are not advanced enough to implement that sort of condition. Last edited: for creating this Just so i get it right, the price needs to touch and hold the halt price shown in the labels/plot for at least 15 seconds but only if it happens within the current 1min bar right? Or do i misunderstand it then? for creating this Just so i get it right, the price needs to touch and hold the halt price shown in the labels/plot for at least 15 seconds but only if it happens within the current 1min bar right? Or do i misunderstand it then? There's a certain amount of leeway around the lines, but yes once the current price starts touching the indicator line, and maybe in some cases is close to it you need to start thinking of a potential halt. Since the indicator is only updated every minute and not real time as in the case of the SEC triggers its only a close guideline and there will be slight variances. But the general idea is if a stock moves X % in a rolling 5 minute period and stays above or below X for 15 secs it will halt. I hope that clears it up. EDIT: If you look at ARAV has halted 2 times so far today, there was a couple times it dipped into the halt indicator but wasn't halted. And comparing it to the other Halt stock I posted earlier, it got deeper into the line before a halt. The Blue arrows are areas where it touched or crossed the halt line but didn't halt. I think there's more flexibility near market open because there's a lot of volatility and price discovery. The purple arrows are where it did halt. Keep in mind right after a halt the information won't be very accurate for about 5 mins. The orange arrows are where the stock played around in the indicator but not long enough, and the last orange arrow I attribute to the delayed nature of the indicator. When I looked at a 10 second chart the price fluctuations from the previous 5 minutes put it between 9-10% depending on which part of the candle you look at. Which is definitely getting close to being halted but apparently not quite enough. I think the SEC must take into account a bit more than ONLY price movement. In the end it's a tool that can only signal a potential halt, not a guaranteed halt. And I'm sure with more experience with the indicator and halted stocks the behavior will be more easily identified by the end user. Last edited: I had to tweak the code on the indicator for stocks that closed under .75 cents. basically just had to swap < for > in a couple areas because it was choosing the wrong formula to plot/indicate. The updated Indicator is here Also updated the original post source code and added link to the new Indicator. Also if anyone could help with code to make the indicator stop using 5 minutes after a halt resumes that would be appreciated. I was thinking the indicator could check volume, and if less than 0 use last price. then once volume > 0 continue using halt resume price for 5 minutes then switch back to the normal previous 5 minute price. Or if anyone has a better idea, i'm all ears. Basically what happens is when a stock halts thinkorswim stops counting ticks as minutes so it's still using the price from before the halt, but it should be using the halt resumed price until 5 mins passes. The price difference from 5 minutes before a halt and right after a halt can be significant. EDIT: On second thought, I'm not sure if volume would work, or tickcount or whatever since it would just look at the last candle for the data. Hmmm, not sure how to make it detect a halt..... unless it's something like if tick count, volume or whatever is unchanged for 3 minutes then wait until tick changes. Not sure if that's possible? Any ideas? Last edited: , this is amazing and much appreciated, thanks! I have yet to trade it/make money with it, but would love to make a donation once I do In the mean time and in order to get there (and hopefully other participants to this thread) ; I wanted to ask if anyone knew how to setup a custom scanner to narrow down a list of potential stocks that are good candidates to be halted shortly (hence the 8%, and 16 % below respectively instead of the 10 % and 20% at which the stocks would be halted).. so as to have a look at these stocks from the watchlist using your indicator above on the chart... a simplified version of the conditions would be : Scan for either: 1) last price is higher or equal to $3 AND the high of the current 1 min candle is at least 8% higher than the low of any of the last 4 candles (1 min) 2) last price is lower then $3 AND the high of the current 1 min candle is at least 16% higher than the low of any of the last 4 candles (1 min) Appreciate any hints or help you might be able to provide.. Thanks and kind regards! Thanks for the positive feedback. I plan at some point to add a visual for the amount the indicator is looking for with each stock. So if it's a 10% move or 20% 15 cents etc. This way immediately after a halt it can at least be somewhat manually calculated. I already pretty much know for myself the amount since I had to do a lot of research on order to create the indicator but I think it would be helpful for others that aren't as familiar with what causes stocks to halt. As far as a scanner to find stocks about to halt, I usually find them with momentum scanners, but I think it's doable to enter a criteria that will find them within TOS. It could be closely built on my scanner settings but instead of 15 cents,10, 20, 40% move in 5 mins it could be like 12 cents 7, 15, 35%. Something close enough to find the stocks moving but not kick out too many stocks. The only bad thing about TOS scanners is when using custom scans they have a delay, and from what I've heard the only way around it is to spam the scan button and even then you have delay from when it populates and from when you can scan again. IMO using TOS to scan for fast moving stocks isnt ideal, there are much better (faster) scanners out there. I'm new to thinkscript so my code might not be ideal, but I wrote a script to calculate how many minutes each halt was. The code below will display a bubble on bars to show how many minutes the previous bar was halted. I believe it may help calculate the 5 minutes after a halt in 's script. Unfortunately, it's beyond my ability to combine them, but hopefully someone more skilled can make it work. Btw, this can be modified to show the halt on the actual bar the halt happened if interested, but I thought the current code might apply more directly to fixing the 5 minutes post-halt. Edit: see this code below instead to show likely halts on the bar it actually happened. # Halt length detector by LennyP (Some lines credit to tomsk & RobertPayne) # Indicates how many minutes the halt on previous bar was # Works on 1 min chart only declare hide_on_daily; input Starttime = 0930; input Endtime = 1600; def Active = SecondsFromTime(Starttime) >= 0 and SecondsTillTime(Endtime)-1 >= 0; def Today = GetDay() == GetLastDay(); def isMinuteAgg = (GetAggregationPeriod() == AggregationPeriod.MIN); def Qualifies = Active and Today and isMinuteAgg; def MinFromStart = rounddown(SecondsFromTime(Starttime)/60,0); def minutesHalted = if Qualifies and MinFromStart > MinFromStart[1] + 1 #if current bar > 1min from previous bar then MinFromStart - MinFromStart[1] #difference in minutes else 0; AddChartBubble(if Qualifies and MinFromStart > MinFromStart[1] + 1 then 1 else 0, high, minutesHalted + " min Halt", Color.orange, yes); Last edited: Using the script above, I also noticed some stock charts have what appear to be 2 minute halts, occasionally other odd lengths. It doesn't seem to be an actual halt, but there are missing minutes in these charts. For example one bar might be at 10:30am and the next will be at 10:32am. I am confused by this. Not sure if its a bug in thinkorswim, or there is some other reason. But I assume the code above may also help in those situations. I'll attach an image as an example: That's true. The 2 minute gaps seem to be happening on low volume stocks, though I am curious as to why it happens. I would be interested in a modified version showing the halt bubble on the actual bar it happened. I would be interested in a modified version showing the halt bubble on the actual bar it happened. Sure thing. I also added a few more options edit - ver 5 (5/3/20): redefined a halt as any gap between bars of 5 mins or more & minor adjustments edit - ver 4: added painting downbar and upbar halt candles different colors and simplified some code # Halt length detector ver. 5 by LennyP (Some lines credit to tomsk & RobertPayne) # Indicates likely halts and how many minutesHalted it was (after halt has ended) # Works on 1 min chart only #hint Show_Bubbles:Show bubbles above likely halts #hint Halt_Bubble_Text:Text in bubble following number of minutes #hint Paint_Halt_Bars:Paint_Halt_Bars bars of likely halts different colors declare hide_on_daily; declare once_per_bar; input Show_Today_Only = Yes; input Show_Bubbles = Yes; input Halt_Bubble_Text = "min"; input Paint_Halt_Bars = No; input Starttime = 0930; input Endtime = 1600; def Active = SecondsFromTime(Starttime) >= 0 and SecondsTillTime(Endtime) - 1 >= 0; def Today = GetDay() == GetLastDay(); def isMinuteAgg = (GetAggregationPeriod() == AggregationPeriod.MIN); def MinFromStart = RoundDown(SecondsFromTime(Starttime) / 60, 0); def Qualifies = if Show_Today_Only then Active and Today and isMinuteAgg else Active and isMinuteAgg; def minutesHalted = if Qualifies and MinFromStart[-1] > MinFromStart + 1 #if next bar > 1min from current bar then MinFromStart[-1] - MinFromStart #difference in minutes else 0; def LikelyHalt = minutesHalted >= 5; AddChartBubble(Qualifies and Show_Bubbles and LikelyHalt, high, minutesHalted + Halt_Bubble_Text, Color.YELLOW, yes); assignPriceColor (if Qualifies and Paint_Halt_Bars and LikelyHalt and close > open then Color.light_green else if Qualifies and Paint_Halt_Bars and LikelyHalt and close < open then color.pink else Color.CURRENT); Last edited: Sure thing. I also added a few more options for the modified version! I tried it today and for some reason, when there is a halt, the chart bubble does not appear right away. From what i could see, the bubble appeared after the un-halt so was wondering if you have noticed the same thing? Was also wondering if you could add an option to paint the current halt bar in a custom color instead of the chart bubble next to the halt bar? Sure no problem. Oh yeah, I forgot to mention that. The way it works, it looks for a gap of 2 mins or more between bars (should only be 1 min on 1 min chart). It doesn't know there was a halt until after a new bar starts and it counts 2 or more mins, so unfortunately it doesn't indicate it until then. I don't think it's possible to detect it on a live bar, but I am still learning thinkscript so I hope I'm wrong and someone else can do it. Btw on some charts there are many gaps of odd times like 2 mins, 3mins, 8mins etc. This script considers these "questionable halts" because as I understand, real halts should be 5, 10 or 15min. exactly in most cases. I honestly have no idea why these multi-minute gaps are on these charts, and hopefully someone more knowledgeable can inform us. They can be hidden, though some of them may be 5 or 10 min and not be a real halt, so consider this script just a guess, and mostly right about the real halts. I have update the code above to add bar painting and bubbles as options. Btw on some charts there are many gaps of odd times like 2 mins, 3mins, 8mins etc. This script considers these "questionable halts" because as I understand, real halts should be 5, 10 or 15min. exactly in most cases. I honestly have no idea why these multi-minute gaps are on these charts, and hopefully someone more knowledgeable can inform us. They can be hidden, though some of them may be 5 or 10 min and not be a real halt, so consider this script just a guess, and mostly right about the real halts. They are not halts, just periods when no trades occurred. That makes sense, thanks. On another note, I've been trying to combine my halt detector with Wannaspeed's indicator, so that it can try to get the 5 minutes after a halt more accurate. My thinkscripting is just okay but improving (still a novice) but the real hurdle hasn't been coding. It's trying to understand what exactly the reference price is after a halt or on opening. The limit bands are based off of this reference. Lines like this throw me off: The first Reference Price for a trading day is the Opening Price on the Primary Listing Exchange if such price occurs less than five minutes after the start of regular trading hours. If the Opening Price on the Primary Listing Exchange does not occur within five minutes after the start of trading hours, the first Reference Price is the arithmetic mean of eligible reported transactions over the preceding five minutes. Being a newbie, I'm wondering how to determine if there is no opening price. My assumption is to check for no volume in the opening minutes, but I'm not sure if that's the meaning. Then, if no volume, what exactly is the arithmetic mean of those 5min of transactions? it says Simple Moving Average (SMA) is basically the arithmetic mean of preceding prices on a specified time period So I thought I just need a moving average during the first 5 minutes, but I learned that a moving average only averages the close of each bar. It doesn't seem to be an average of all of the transactions as described in the LULD FAQ. I'm wondering if a closer approach would be to use the midpoint of each bar and then average those numbers out during the first 5 minutes. Actually, the same "arithmetic mean of eligible transactions" terminology is used to describe how to calculate the reference price throughout the whole trading day. If anyone has any insight on how get get a more accurate reference price, that would be great, or else I'll just do my best to be close. Btw, in studying the documentation, I found that the formula has changed a bit since Wannaspeed posted his script. There was an "Amendment 18" that apparently took effect Feb 2020 that removed the doubling of the limit bands from 9:30 to 9:45, and some of the doubling from 3:35 till close. is some info for those interested or just google "LULD Limit Up Limit Down" I'll attach some sample screenshots here: If I manage to complete this script successfully, it will include the updated price bands, unless a better coder can beat me to it. Here is my code for an updated Halt Estimate Indicator for Tier 2 Stocks. Thanks to for the original solution! Btw I am only a novice so my coding may not be following best practices, but I spent a lot of time trying to follow the LULD Rules to my ability & it seems to work pretty well. Please improve/fix anything that needs it. This has been updated for the new February 2020 Limit Up Limit Down halt rules. They eliminated the doubling of the bands from 9:30 to 9:45, and also changed the doubling rules slightly from 3:35-4pm. I tried to follow all of the halt rules, and it also works during the 5 minutes after a halt. It is somewhat future proof in case they change the rules again. The percentages, hours, and price ranges are adjustable near the top of the code. So, for example, there is now no doubling in the first 15min of opening, but if they change that back, you can just type in the new changeover time for "normal_Hours_Start". The "930" could be changed back to "945" and the new percentages for the opening period would need to be changed below. You can also update price boundaries (the changeover price for what they consider a low, mid, or high priced stock) if that changes. Because of limits of thinkscript data, there are limits to accuracy of the estimated halt price. Here are some of the major obstacles I encountered: (1) Determining the average price for transactions over the last 5 minutes - We don't (to my knowledge) have access to the real average price a stock traded at in a given 1-minute bar, so I estimate it as (high+low)/2, the midpoint of the bar. For example, if a bar has 95% of its transactions near the low, and only 5% near the high, the estimate will be off. (2) 30 Seconds Reference Price rule: "The reference price will only be updated if the new reference price is at least 1% away in either direction from the current. Each new Reference Price shall remain in effect for at least 30 seconds." - We only have access to minute chart data, so reference prices that are held for 30 seconds are impossible to get exactly right. We could be updating the reference when we shouldn't or vice-versa. For example if a new bar starts with a huge move, we don't know if we are within the 30 second hold. If we are, the current price band should stay put. If not, it should update based on the current bar's movement (in addition to the previous 5 min of bars). (3) Previous 5 Minutes - The definition of previous 5 minutes seems to be a live rolling 5 minute period. Since we can access only full minute bar data, this won't be totally accurate either. Given these limitations, for each bar there are 2 estimated halt prices calculated for the upper band and 2 for the lower band. One uses the midpoint of the current bar in the calculation, and the more conservative one assumes the current bar's price spent more time inward, closer to the reference. Other considerations are also calculated. There are 3 display modes: Range - Shaded area between the two calculated halt estimates of each band Line - A line of the conservative halt estimate for each band Area - Shades the entire area above the upper conservative band, and below the lower conservative band I personally prefer Range. With it I have a better idea of where a halt is likely to occur. Remember this indicator is just an estimate. A halt may (and will) sometimes occur before the price reaches a band, or it may not occur even though price has reached a band. It seems a halt also requires the price to touch the real band for 15 seconds for a halt to occur, so that may cause some misses too. Btw the bubbles showing actual halts are a separate script I created . It only shows the length of a likely halt after a new bar after the fact. I like using them both together. 1-MINUTE CHART ONLY TIER 2 STOCKS ONLY. Not accurate on tier 1 (S&P 500, Russell 1000 stocks) # LULD Halt Estimate Indicator 1.01 by Lenny # - various code concepts from multiple posts at Usethinkscript.com # v1.0 5/2/20 -original # v1.01 6/9/20 -minor fix in situation where there is no lower band # 1-MINUTE CHART ONLY # TIER 2 STOCKS ONLY. Not accurate on tier 1 (S&P 500, Russell 1000 stocks) # Works with 3 (or less) stock price ranges (Low, Mid, High) # Works with 3 (or less) time periods for each price range (Open, Normal, Close) # (If opening and/or closing hours are not used, enter same time and boundries as normal hours) declare hide_on_daily; #================= INPUTS =================# input display_Mode = {default Range, Line, Area}; input show_today_only = yes; input show_labels = yes; # Hour ranges (enter duplicate of normal hours if no opening or closing hours difference) input open_Time = 0930; input normal_Hours_Start = 0930; input normal_Hours_End = 1535; input close_Time = 1600; # Adjust bands very slightly inward for margin of error # (by default 1% of the difference between ref price and band) def Estimate_Adjustment_Percent = 1; # 1% recommended #========== BOUNDRIES AND LIMITS ==========# # Price range boundries (Low/Mid, Mid/High priced stocks) def LowPriceMax = .75; def MidPriceMax = 3.00; # Low-priced stocks percentage / dollar(alt) limit bands def LowPriceOpenPercent = 75; # Will choose lower of percent and dollar value def LowPriceOpenAlt = .15; def LowPriceNormalPercent = 75; # Will choose lower of percent and dollar value def LowPriceNormalAlt = .15; def LowPriceClosePercentUp = 150; # upper and lower have different rules for low priced stocks (currently) def LowPriceCloseAltUp = .30; # Will choose lower of percent and dollar value def LowPriceClosePercentDown = 0; # 0 = no limit, no lower band needed (currently) def LowPriceCloseAltDown = 0; # 0 = no limit, no lower band needed (currently) # Mid-priced stocks percentage limit bands def MidPriceOpenPercent = 20; def MidPriceNormalPercent = 20; def MidPriceClosePercent = 40; # High-priced stocks percentage limit bands def HighPriceOpenPercent = 10; def HighPriceNormalPercent = 10; def HighPriceClosePercent = 10; # Minimum reference price change (%) def RefMinChangePercent = 1; # must move 1% to move bands (LULD rules) #================= DEFs =====================# def Active = SecondsFromTime(open_Time) >= 0 and SecondsTillTime(close_Time) - 1 >= 0; def Today = GetDay() == GetLastDay(); def isMinuteAgg = (GetAggregationPeriod() == AggregationPeriod.MIN); def Qualifies = if show_today_only then Today and Active and isMinuteAgg else Active and isMinuteAgg; def FirstBar = Active == 1 and Active[1] == 0; def CurrentMinute = RoundDown(SecondsFromTime(open_Time) / 60, 0) + 1; def MinFromPrevBar = CurrentMinute - CurrentMinute[1]; def PriorClose = close(period = AggregationPeriod.DAY)[1]; def lastBarClose = close[1]; def ExtendBar1 = FirstBar[1] and IsNaN(close) and !IsNaN(close[1]); #1st bar bands will extend to right so 1st bar is visible live def LowPriced = PriorClose < LowPriceMax; def MidPriced = PriorClose >= LowPriceMax and PriorClose <= MidPriceMax; def HighPriced = PriorClose > MidPriceMax; def OpeningHours = SecondsFromTime(open_Time) >= 0 and SecondsTillTime(normal_Hours_Start) > 0; def NormalHours = SecondsFromTime(normal_Hours_Start) >= 0 and SecondsTillTime(normal_Hours_End) > 0; def ClosingHours = SecondsFromTime(normal_Hours_End) >= 0 and SecondsTillTime(close_Time) > 0; #======= CALCULATE REFERENCE ESTIMATES =======# def vol = volume; def histAvgPrice = hl2; def histAvgPriceVol = histAvgPrice * vol; def Bar0AvgPriceUp = (open + low) / 2; # Bar0 = current bar def Bar0AvgPriceVolUp = Bar0AvgPriceUp * vol; def Bar0AvgPriceLw = (open + high) / 2; def Bar0AvgPriceVolLw = Bar0AvgPriceLw * vol; def Bar0AvgPriceHL2 = hl2; def Bar0AvgPriceHL2Vol = Bar0AvgPriceHL2 * vol; def Bar1AvgPriceVol = histAvgPriceVol[1]; def Bar2AvgPriceVol = histAvgPriceVol[2]; def Bar3AvgPriceVol = histAvgPriceVol[3]; def Bar4AvgPriceVol = histAvgPriceVol[4]; def Bar5AvgPriceVol = histAvgPriceVol[5]; def Bar0Within5Min = 1; def Bar1Within5Min = CurrentMinute[1] >= CurrentMinute - 4 and CurrentMinute[1] > 0; def Bar2Within5Min = CurrentMinute[2] >= CurrentMinute - 5 and CurrentMinute[2] > 0; def Bar3Within5Min = CurrentMinute[3] >= CurrentMinute - 5 and CurrentMinute[3] > 0; def Bar4Within5Min = CurrentMinute[4] >= CurrentMinute - 5 and CurrentMinute[4] > 0; def Bar5Within5Min = CurrentMinute[5] >= CurrentMinute - 5 and CurrentMinute[5] > 0; def totalAvgPriceVolUp = (Bar0AvgPriceVolUp * Bar0Within5Min) + (Bar1AvgPriceVol * Bar1Within5Min) + (Bar2AvgPriceVol * Bar2Within5Min) + (Bar3AvgPriceVol * Bar3Within5Min) + (Bar4AvgPriceVol * Bar4Within5Min) + (Bar5AvgPriceVol * Bar5Within5Min); def totalAvgPriceVolLw = (Bar0AvgPriceVolLw * Bar0Within5Min) + (Bar1AvgPriceVol * Bar1Within5Min) + (Bar2AvgPriceVol * Bar2Within5Min) + (Bar3AvgPriceVol * Bar3Within5Min) + (Bar4AvgPriceVol * Bar4Within5Min) + (Bar5AvgPriceVol * Bar5Within5Min); def totalAvgPriceVolHL2 = (Bar0AvgPriceHL2Vol * Bar0Within5Min) + (Bar1AvgPriceVol * Bar1Within5Min) + (Bar2AvgPriceVol * Bar2Within5Min) + (Bar3AvgPriceVol * Bar3Within5Min) + (Bar4AvgPriceVol * Bar4Within5Min) + (Bar5AvgPriceVol * Bar5Within5Min); def totalVol = (vol * Bar0Within5Min) + (vol[1] * Bar1Within5Min) + (vol[2] * Bar2Within5Min) + (vol[3] * Bar3Within5Min) + (vol[4] * Bar4Within5Min) + (vol[5] * Bar5Within5Min); def vwapUp = totalAvgPriceVolUp / totalVol; def vwapLw = totalAvgPriceVolLw / totalVol; def vwapHL2 = totalAvgPriceVolHL2 / totalVol; #Reference must move at least 1% or previous ref constinues def RefMinIncDecimal = (RefMinChangePercent *.01) + 1; def RefMinDecDecimal = 1 - (RefMinChangePercent *.01); def RefPriceUp = if (FirstBar or MinFromPrevBar >= 5) then Bar0AvgPriceUp else if (vwapUp >= RefPriceUp[1] * RefMinIncDecimal) or (vwapUp <= RefPriceUp[1] * RefMinDecDecimal) then vwapUp else RefPriceUp[1]; def RefPriceLw = if (FirstBar or MinFromPrevBar >= 5) then Bar0AvgPriceLw else if (vwapLw >= RefPriceLw[1] * RefMinIncDecimal) or (vwapLw <= RefPriceLw[1] * RefMinDecDecimal) then vwapLw else RefPriceLw[1]; def RefPriceHL2 = if (FirstBar or MinFromPrevBar >= 5) then Bar0AvgPriceHL2 else if (vwapHL2 >= RefPriceHL2[1] * RefMinIncDecimal) or (vwapHL2 <= RefPriceHL2[1] * RefMinDecDecimal) then vwapHL2 else RefPriceHL2[1]; #============= CALCULATE BANDS ==============# def UpperBand1 = if LowPriced and OpeningHours then RefPriceUp + Min(RefPriceUp * LowPriceOpenPercent * .01 , LowPriceOpenAlt) else if LowPriced and NormalHours then RefPriceUp + Min(RefPriceUp * LowPriceNormalPercent * .01 , LowPriceNormalAlt) else if LowPriced and ClosingHours then RefPriceUp + Min(RefPriceUp * LowPriceClosePercentUp * .01 , LowPriceCloseAltUp) else if MidPriced and OpeningHours then RefPriceUp + (RefPriceUp * MidPriceOpenPercent * .01) else if MidPriced and NormalHours then RefPriceUp + (RefPriceUp * MidPriceNormalPercent * .01) else if MidPriced and ClosingHours then RefPriceUp + (RefPriceUp * MidPriceClosePercent * .01) else if HighPriced and OpeningHours then RefPriceUp + (RefPriceUp * HighPriceOpenPercent * .01) else if HighPriced and NormalHours then RefPriceUp + (RefPriceUp * HighPriceNormalPercent * .01) else if HighPriced and ClosingHours then RefPriceUp + (RefPriceUp * HighPriceClosePercent * .01) else Double.NaN; def UpperBand2 = if LowPriced and OpeningHours then RefPriceHL2 + Min(RefPriceHL2 * LowPriceOpenPercent * .01 , LowPriceOpenAlt) else if LowPriced and NormalHours then RefPriceHL2 + Min(RefPriceHL2 * LowPriceNormalPercent * .01 , LowPriceNormalAlt) else if LowPriced and ClosingHours then RefPriceHL2 + Min(RefPriceHL2 * LowPriceClosePercentUp * .01 , LowPriceCloseAltUp) else if MidPriced and OpeningHours then RefPriceHL2 + (RefPriceHL2 * MidPriceOpenPercent * .01) else if MidPriced and NormalHours then RefPriceHL2 + (RefPriceHL2 * MidPriceNormalPercent * .01) else if MidPriced and ClosingHours then RefPriceHL2 + (RefPriceHL2 * MidPriceClosePercent * .01) else if HighPriced and OpeningHours then RefPriceHL2 + (RefPriceHL2 * HighPriceOpenPercent * .01) else if HighPriced and NormalHours then RefPriceHL2 + (RefPriceHL2 * HighPriceNormalPercent * .01) else if HighPriced and ClosingHours then RefPriceHL2 + (RefPriceHL2 * HighPriceClosePercent * .01) else Double.NaN; def LowerBand1 = if LowPriced and OpeningHours then RefPriceLw - Min(RefPriceLw * LowPriceOpenPercent * .01 , LowPriceOpenAlt) else if LowPriced and NormalHours then RefPriceLw - Min(RefPriceLw * LowPriceNormalPercent * .01 , LowPriceNormalAlt) else if LowPriced and ClosingHours and LowPriceClosePercentDown == 0 and LowPriceCloseAltDown == 0 then 0 else if LowPriced and ClosingHours and (LowPriceClosePercentDown == 0 or LowPriceCloseAltDown == 0) then RefPriceLw - Max(RefPriceLw * LowPriceClosePercentDown * .01 , LowPriceCloseAltDown) else if LowPriced and ClosingHours then RefPriceLw - Min(RefPriceLw * LowPriceClosePercentDown * .01 , LowPriceCloseAltDown) else if MidPriced and OpeningHours then RefPriceLw - (RefPriceLw * MidPriceOpenPercent * .01) else if MidPriced and NormalHours then RefPriceLw - (RefPriceLw * MidPriceNormalPercent * .01) else if MidPriced and ClosingHours then RefPriceLw - (RefPriceLw * MidPriceClosePercent * .01) else if HighPriced and OpeningHours then RefPriceLw - (RefPriceLw * HighPriceOpenPercent * .01) else if HighPriced and NormalHours then RefPriceLw - (RefPriceLw * HighPriceNormalPercent * .01) else if HighPriced and ClosingHours then RefPriceLw - (RefPriceLw * HighPriceClosePercent * .01) else Double.NaN; def LowerBand2 = if LowPriced and OpeningHours then RefPriceHL2 - Min(RefPriceHL2 * LowPriceOpenPercent * .01 , LowPriceOpenAlt) else if LowPriced and NormalHours then RefPriceHL2 - Min(RefPriceHL2 * LowPriceNormalPercent * .01 , LowPriceNormalAlt) else if LowPriced and ClosingHours and LowPriceClosePercentDown == 0 and LowPriceCloseAltDown == 0 then 0 else if LowPriced and ClosingHours and (LowPriceClosePercentDown == 0 or LowPriceCloseAltDown == 0) then RefPriceHL2 - Max(RefPriceHL2 * LowPriceClosePercentDown * .01 , LowPriceCloseAltDown) else if LowPriced and ClosingHours then RefPriceHL2 - Min(RefPriceHL2 * LowPriceClosePercentDown * .01 , LowPriceCloseAltDown) else if MidPriced and OpeningHours then RefPriceHL2 - (RefPriceHL2 * MidPriceOpenPercent * .01) else if MidPriced and NormalHours then RefPriceHL2 - (RefPriceHL2 * MidPriceNormalPercent * .01) else if MidPriced and ClosingHours then RefPriceHL2 - (RefPriceHL2 * MidPriceClosePercent * .01) else if HighPriced and OpeningHours then RefPriceHL2 - (RefPriceHL2 * HighPriceOpenPercent * .01) else if HighPriced and NormalHours then RefPriceHL2 - (RefPriceHL2 * HighPriceNormalPercent * .01) else if HighPriced and ClosingHours then RefPriceHL2 - (RefPriceHL2 * HighPriceClosePercent * .01) else Double.NaN; # Adjust bands very slightly inward (by default 1% of the difference between ref price and band) def EstimateAdj = Estimate_Adjustment_Percent * .01; def adjustedUpper1 = if UpperBand1 <= UpperBand2 then UpperBand1 - ((UpperBand1 - RefPriceUp) * EstimateAdj) else UpperBand1; def adjustedUpper2 = if UpperBand2 < UpperBand1 then UpperBand2 - ((UpperBand2 - RefPriceHL2) * EstimateAdj) else UpperBand2; def adjustedLower1 = if LowerBand1 >= LowerBand2 and LowerBand1 != 0 then LowerBand1 + ((RefPriceLw - LowerBand1) * EstimateAdj) else LowerBand1; def adjustedLower2 = if LowerBand2 > LowerBand1 and LowerBand2 != 0 then LowerBand2 + ((RefPriceHL2 - LowerBand2) * EstimateAdj) else LowerBand2; # Adjust a band when prior bar closed at likely true band limit, but 15 seconds requirement likely carries band over to current bar def carryOverBand = high == lastBarClose and low == lastBarClose; def FinalUpper1 = if !FirstBar then if lastBarClose >= adjustedUpper1[1] and carryOverBand and MinFromPrevBar == 1 then lastBarClose else adjustedUpper1 else adjustedUpper1; def FinalUpper2 = if !FirstBar then if lastBarClose >= adjustedUpper2[1] and carryOverBand and MinFromPrevBar == 1 then lastBarClose else Max(adjustedUpper1 , adjustedUpper2) else adjustedUpper2; def FinalLower1 = if !FirstBar then if lastBarClose <= adjustedLower1[1] and carryOverBand and MinFromPrevBar == 1 then lastBarClose else adjustedLower1 else adjustedLower1; def FinalLower2 = if !FirstBar then if lastBarClose <= adjustedLower2[1] and carryOverBand and MinFromPrevBar == 1 then lastBarClose else Min(adjustedLower1 , adjustedLower2) else adjustedLower2; def nearestUpper = Min(FinalUpper1, FinalUpper2); def nearestLower = Max(FinalLower1, FinalLower2); # 1st bar bands will extend to right 1 bar so 1st bar is visible live def FinalUpper1Ext = if ExtendBar1 then FinalUpper1[1] else FinalUpper1; def FinalUpper2Ext = if ExtendBar1 then FinalUpper2[1] else FinalUpper2; def FinalLower1Ext = if ExtendBar1 then FinalLower1[1] else FinalLower1; def FinalLower2Ext = if ExtendBar1 then FinalLower2[1] else FinalLower2; def nearestUpperExt = if ExtendBar1 then nearestUpper[1] else nearestUpper; def nearestLowerExt = if ExtendBar1 then nearestLower[1] else nearestLower; def mode; switch (display_Mode) { case Range: mode = 1; #Range case Area: mode = 2; #Area case Line: mode = 3; } #Line AddCloud(if Qualifies and mode == 1 then FinalUpper1Ext else Double.NaN, FinalUpper2Ext, Color.GRAY, Color.GRAY, 1); AddCloud(if Qualifies and mode == 1 then FinalLower1Ext else Double.NaN, FinalLower2Ext, Color.GRAY, Color.GRAY, 1); AddCloud(if Qualifies and mode == 2 then nearestUpperExt else Double.NaN, Double.POSITIVE_INFINITY, Color.GRAY, Color.GRAY, 0); AddCloud(if Qualifies and mode == 2 then nearestLowerExt else Double.NaN, Double.NEGATIVE_INFINITY, Color.GRAY, Color.GRAY, 0); plot Upper_Line = if Qualifies and mode == 3 then nearestUpperExt else Double.NaN; plot Lower_Line = if Qualifies and mode == 3 then nearestLowerExt else Double.NaN; AddLabel(Qualifies and show_labels, "HaltUp: " + AsDollars(nearestUpper) + " HaltLw: " + (if nearestLower == 0 then "None" else AsDollars(nearestLower)), Color.ORANGE); Last edited: What is useThinkScript? useThinkScript is the #1 community of stock market investors using indicators and other tools to power their trading strategies. Traders of all skill levels use our forums to learn about scripting and indicators, help each other, and discover new ways to gain an edge in the markets. How do I get started? We get it. Our forum can be intimidating, if not overwhelming. With thousands of topics, tens of thousands of posts, our community has created an incredibly deep knowledge base for stock traders. No one can ever exhaust every resource provided on our site. If you are new, or just looking for guidance, here are some helpful links to get you started. • The most viewed thread: • Our most popular indicator: • Answers to frequently asked questions: What are the benefits of VIP Membership? VIP members get exclusive access to these proven and tested premium indicators: Buy the Dip, Advanced Market Moves 2.0, Take Profit, and Volatility Trading Range. In addition, VIP members get access to over 50 VIP-only custom indicators, add-ons, and strategies, private VIP-only forums, private Discord channel to discuss trades and strategies in real-time, customer support, trade alerts, and much more. Learn all about VIP membership here. How can I access the premium indicators? To access the premium indicators, which are plug and play ready, sign up for VIP membership here.
{"url":"https://usethinkscript.com/threads/circuit-breaker-halt-indicator-for-thinkorswim.1043/","timestamp":"2024-11-02T20:31:50Z","content_type":"text/html","content_length":"200682","record_id":"<urn:uuid:5595b9a1-37a5-48a7-8a8f-4707ba323858>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00811.warc.gz"}
Dynamic programming Jump to navigation Jump to search Dynamic programming is both a mathematical optimization method and a computer programming method. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics. In both contexts it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, then it is said to have optimal substructure. If sub-problems can be nested recursively inside larger problems, so that dynamic programming methods are applicable, then there is a relation between the value of the larger problem and the values of the sub-problems.^[1] In the optimization literature this relationship is called the Bellman equation. Mathematical optimization[edit] In terms of mathematical optimization, dynamic programming usually refers to simplifying a decision by breaking it down into a sequence of decision steps over time. This is done by defining a sequence of value functions V[1], V[2], ..., V[n] taking y as an argument representing the state of the system at times i from 1 to n. The definition of V[n](y) is the value obtained in state y at the last time n. The values V[i] at earlier times i = n −1, n − 2, ..., 2, 1 can be found by working backwards, using a recursive relationship called the Bellman equation. For i = 2, ..., n, V[i−1] at any state y is calculated from V[i] by maximizing a simple function (usually the sum) of the gain from a decision at time i − 1 and the function V[i] at the new state of the system if this decision is made. Since V[i] has already been calculated for the needed states, the above operation yields V[i−1] for those states. Finally, V[1] at the initial state of the system is the value of the optimal solution. The optimal values of the decision variables can be recovered, one by one, by tracking back the calculations already performed. Control theory[edit] In control theory, a typical problem is to find an admissible control ${\displaystyle \mathbf {u} ^{\ast }}$ which causes the system ${\displaystyle {\dot {\mathbf {x} }}(t)=\mathbf {g} \left(\mathbf {x} (t),\mathbf {u} (t),t\right)}$ to follow an admissible trajectory ${\displaystyle \mathbf {x} ^{\ast }}$ on a continuous time interval ${\displaystyle t_{0}\leq t\leq t_{1}}$ that minimizes a cost function ${\displaystyle J=b\left(\mathbf {x} (t_{1}),t_{1}\right)+\int _{t_{0}}^{t_{1}}f\left(\mathbf {x} (t),\mathbf {u} (t),t\right)\mathrm {d} t}$ The solution to this problem is an optimal control law or policy ${\displaystyle \mathbf {u} ^{\ast }=h(\mathbf {x} (t),t)}$, which produces an optimal trajectory ${\displaystyle \mathbf {x} ^{\ast }}$ and an optimized loss function ${\displaystyle J^{\ast }}$. The latter obeys the fundamental equation of dynamic programming: ${\displaystyle -J_{t}^{\ast }=\min _{\mathbf {u} }\left\{f\left(\mathbf {x} (t),\mathbf {u} (t),t\right)+J_{x}^{\ast {\mathsf {T}}}\mathbf {g} \left(\mathbf {x} (t),\mathbf {u} (t),t\right)\ a partial differential equation known as the Hamilton–Jacobi–Bellman equation, in which ${\displaystyle J_{x}^{\ast }={\frac {\partial J^{\ast }}{\partial \mathbf {x} }}=\left[{\frac {\partial J^{\ ast }}{\partial x_{1}}}~~~~{\frac {\partial J^{\ast }}{\partial x_{2}}}~~~~\dots ~~~~{\frac {\partial J^{\ast }}{\partial x_{n}}}\right]^{\mathsf {T}}}$ and ${\displaystyle J_{t}^{\ast }={\frac {\ partial J^{\ast }}{\partial t}}}$. One finds the minimizing ${\displaystyle \mathbf {u} }$ in terms of ${\displaystyle t}$, ${\displaystyle \mathbf {x} }$, and the unknown function ${\displaystyle J_ {x}^{\ast }}$ and then substitutes the result into the Hamilton–Jacobi–Bellman equation to get the partial differential equation to be solved with boundary condition ${\displaystyle J\left(t_{1}\ right)=b\left(\mathbf {x} (t_{1}),t_{1}\right)}$.^[2] In practice, this generally requires numerical techniques for some discrete approximation to the exact optimization relationship. Alternatively, the continuous process can be approximated by a discrete system, which leads to a following recurrence relation analog to the Hamilton–Jacobi–Bellman equation: ${\displaystyle J_{k}^{\ast }\left(\mathbf {x} _{n-k}\right)=\min _{\mathbf {u} _{n-k}}\left\{{\hat {f}}\left(\mathbf {x} _{n-k},\mathbf {u} _{n-k}\right)+J_{k-1}^{\ast }\left({\hat {g}}\left(\ mathbf {x} _{n-k},\mathbf {u} _{n-k}\right)\right)\right\}}$ at the ${\displaystyle k}$-th stage of ${\displaystyle n}$ equally spaced discrete time intervals, and where ${\displaystyle {\hat {f}}}$ and ${\displaystyle {\hat {g}}}$ denote discrete approximations to ${\displaystyle f}$ and ${\displaystyle \mathbf {g} }$. This functional equation is known as the Bellman equation, which can be solved for an exact solution of the discrete approximation of the optimization equation.^[3] Example from economics: Ramsey's problem of optimal saving[edit] In economics, the objective is generally to maximize (rather than minimize) some dynamic social welfare function. In Ramsey's problem, this function relates amounts of consumption to levels of utility. Loosely speaking, the planner faces the trade-off between contemporaneous consumption and future consumption (via investment in capital stock that is used in production), known as intertemporal choice. Future consumption is discounted at a constant rate ${\displaystyle \beta \in (0,1)}$. A discrete approximation to the transition equation of capital is given by ${\displaystyle k_{t+1}={\hat {g}}\left(k_{t},c_{t}\right)=f(k_{t})-c_{t}}$ where ${\displaystyle c}$ is consumption, ${\displaystyle k}$ is capital, and ${\displaystyle f}$ is a production function satisfying the Inada conditions. An initial capital stock ${\displaystyle k_ {0}>0}$ is assumed. Let ${\displaystyle c_{t}}$ be consumption in period t, and assume consumption yields utility ${\displaystyle u(c_{t})=\ln(c_{t})}$ as long as the consumer lives. Assume the consumer is impatient, so that he discounts future utility by a factor b each period, where ${\displaystyle 0<b<1}$. Let ${\displaystyle k_{t}}$ be capital in period t. Assume initial capital is a given amount ${\displaystyle k_{0}>0}$, and suppose that this period's capital and consumption determine next period's capital as ${\displaystyle k_{t+1}=Ak_{t}^{a}-c_{t}}$, where A is a positive constant and ${\displaystyle 0<a <1}$. Assume capital cannot be negative. Then the consumer's decision problem can be written as follows: ${\displaystyle \max \sum _{t=0}^{T}b^{t}\ln(c_{t})}$ subject to ${\displaystyle k_{t+1}=Ak_{t}^{a}-c_{t}\geq 0}$ for all ${\displaystyle t=0,1,2,\ldots ,T}$ Written this way, the problem looks complicated, because it involves solving for all the choice variables ${\displaystyle c_{0},c_{1},c_{2},\ldots ,c_{T}}$. (Note that ${\displaystyle k_{0}}$ is not a choice variable—the consumer's initial capital is taken as given.) The dynamic programming approach to solve this problem involves breaking it apart into a sequence of smaller decisions. To do so, we define a sequence of value functions ${\displaystyle V_{t}(k)}$, for ${\displaystyle t=0,1,2,\ldots ,T,T+1}$ which represent the value of having any amount of capital k at each time t. Note that ${\displaystyle V_{T+1}(k)=0}$, that is, there is (by assumption) no utility from having capital after death. The value of any quantity of capital at any previous time can be calculated by backward induction using the Bellman equation. In this problem, for each ${\displaystyle t=0,1,2,\ldots ,T}$, the Bellman equation is ${\displaystyle V_{t}(k_{t})\,=\,\max \left(\ln(c_{t})+bV_{t+1}(k_{t+1})\right)}$ subject to ${\displaystyle k_{t+1}=Ak_{t}^{a}-c_{t}\geq 0}$ This problem is much simpler than the one we wrote down before, because it involves only two decision variables, ${\displaystyle c_{t}}$ and ${\displaystyle k_{t+1}}$. Intuitively, instead of choosing his whole lifetime plan at birth, the consumer can take things one step at a time. At time t, his current capital ${\displaystyle k_{t}}$ is given, and he only needs to choose current consumption ${\displaystyle c_{t}}$ and saving ${\displaystyle k_{t+1}}$. To actually solve this problem, we work backwards. For simplicity, the current level of capital is denoted as k. ${\displaystyle V_{T+1}(k)}$ is already known, so using the Bellman equation once we can calculate ${\displaystyle V_{T}(k)}$, and so on until we get to ${\displaystyle V_{0}(k)}$, which is the value of the initial decision problem for the whole lifetime. In other words, once we know ${\displaystyle V_{T-j+1}(k)}$, we can calculate ${\displaystyle V_{T-j}(k)}$, which is the maximum of ${\displaystyle \ln(c_{T-j})+bV_{T-j+1}(Ak^{a}-c_{T-j})}$, where ${\displaystyle c_{T-j}}$ is the choice variable and ${\displaystyle Ak^{a}-c_{T-j}\geq 0}$. Working backwards, it can be shown that the value function at time ${\displaystyle t=T-j}$ is ${\displaystyle V_{T-j}(k)\,=\,a\sum _{i=0}^{j}a^{i}b^{i}\ln k+v_{T-j}}$ where each ${\displaystyle v_{T-j}}$ is a constant, and the optimal amount to consume at time ${\displaystyle t=T-j}$ is ${\displaystyle c_{T-j}(k)\,=\,{\frac {1}{\sum _{i=0}^{j}a^{i}b^{i}}}Ak^{a}}$ which can be simplified to {\displaystyle {\begin{aligned}c_{T}(k)&=Ak^{a}\\c_{T-1}(k)&={\frac {Ak^{a}}{1+ab}}\\c_{T-2}(k)&={\frac {Ak^{a}}{1+ab+a^{2}b^{2}}}\\&\dots \\c_{2}(k)&={\frac {Ak^{a}}{1+ab+a^{2}b^{2}+\ldots +a^ {T-2}b^{T-2}}}\\c_{1}(k)&={\frac {Ak^{a}}{1+ab+a^{2}b^{2}+\ldots +a^{T-2}b^{T-2}+a^{T-1}b^{T-1}}}\\c_{0}(k)&={\frac {Ak^{a}}{1+ab+a^{2}b^{2}+\ldots +a^{T-2}b^{T-2}+a^{T-1}b^{T-1}+a^{T}b^{T}}}\end We see that it is optimal to consume a larger fraction of current wealth as one gets older, finally consuming all remaining wealth in period T, the last period of life. Computer programming[edit] There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping sub-problems. If a problem can be solved by combining optimal solutions to non-overlapping sub-problems, the strategy is called "divide and conquer" instead^[1]. This is why merge sort and quick sort are not classified as dynamic programming problems. Optimal substructure means that the solution to a given optimization problem can be obtained by the combination of optimal solutions to its sub-problems. Such optimal substructures are usually described by means of recursion. For example, given a graph G=(V,E), the shortest path p from a vertex u to a vertex v exhibits optimal substructure: take any intermediate vertex w on this shortest path p. If p is truly the shortest path, then it can be split into sub-paths p[1] from u to w and p[2] from w to v such that these, in turn, are indeed the shortest paths between the corresponding vertices (by the simple cut-and-paste argument described in Introduction to Algorithms). Hence, one can easily formulate the solution for finding shortest paths in a recursive manner, which is what the Bellman–Ford algorithm or the Floyd–Warshall algorithm does. Overlapping sub-problems means that the space of sub-problems must be small, that is, any recursive algorithm solving the problem should solve the same sub-problems over and over, rather than generating new sub-problems. For example, consider the recursive formulation for generating the Fibonacci series: F[i] = F[i−1] + F[i−2], with base case F[1] = F[2] = 1. Then F[43] = F[42] + F[41], and F[42] = F[41] + F[40]. Now F[41] is being solved in the recursive sub-trees of both F[43] as well as F[42]. Even though the total number of sub-problems is actually small (only 43 of them), we end up solving the same problems over and over if we adopt a naive recursive solution such as this. Dynamic programming takes account of this fact and solves each sub-problem only once. This can be achieved in either of two ways: • Top-down approach: This is the direct fall-out of the recursive formulation of any problem. If the solution to any problem can be formulated recursively using the solution to its sub-problems, and if its sub-problems are overlapping, then one can easily memoize or store the solutions to the sub-problems in a table. Whenever we attempt to solve a new sub-problem, we first check the table to see if it is already solved. If a solution has been recorded, we can use it directly, otherwise we solve the sub-problem and add its solution to the table. • Bottom-up approach: Once we formulate the solution to a problem recursively as in terms of its sub-problems, we can try reformulating the problem in a bottom-up fashion: try solving the sub-problems first and use their solutions to build-on and arrive at solutions to bigger sub-problems. This is also usually done in a tabular form by iteratively generating solutions to bigger and bigger sub-problems by using the solutions to small sub-problems. For example, if we already know the values of F[41] and F[40], we can directly calculate the value of F[42]. Some programming languages can automatically memoize the result of a function call with a particular set of arguments, in order to speed up call-by-name evaluation (this mechanism is referred to as call-by-need). Some languages make it possible portably (e.g. Scheme, Common Lisp or Perl). Some languages have automatic memoization built in, such as tabled Prolog and J, which supports memoization with the M. adverb.^[4] In any case, this is only possible for a referentially transparent function. Memoization is also encountered as an easily accessible design pattern within term-rewrite based languages such as Wolfram Language. Dynamic programming is widely used in bioinformatics for the tasks such as sequence alignment, protein folding, RNA structure prediction and protein-DNA binding. The first dynamic programming algorithms for protein-DNA binding were developed in the 1970s independently by Charles DeLisi in USA^[5] and Georgii Gurskii and Alexander Zasedatelev in USSR.^[6] Recently these algorithms have become very popular in bioinformatics and computational biology, particularly in the studies of nucleosome positioning and transcription factor binding Examples: Computer algorithms[edit] Dijkstra's algorithm for the shortest path problem[edit] From a dynamic programming point of view, Dijkstra's algorithm for the shortest path problem is a successive approximation scheme that solves the dynamic programming functional equation for the shortest path problem by the Reaching method.^[7]^[8]^[9] In fact, Dijkstra's explanation of the logic behind the algorithm,^[10] namely Problem 2. Find the path of minimum total length between two given nodes ${\displaystyle P}$ and ${\displaystyle Q}$. We use the fact that, if ${\displaystyle R}$ is a node on the minimal path from ${\displaystyle P}$ to ${\displaystyle Q}$, knowledge of the latter implies the knowledge of the minimal path from ${\displaystyle P}$ to ${\displaystyle R}$. is a paraphrasing of Bellman's famous Principle of Optimality in the context of the shortest path problem. Fibonacci sequence[edit] Here is a naïve implementation of a function finding the nth member of the Fibonacci sequence, based directly on the mathematical definition: function fib(n) if n <= 1 return n return fib(n − 1) + fib(n − 2) Notice that if we call, say, fib(5), we produce a call tree that calls the function on the same value many different times: 1. fib(5) 2. fib(4) + fib(3) 3. (fib(3) + fib(2)) + (fib(2) + fib(1)) 4. ((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1)) 5. (((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1)) In particular, fib(2) was calculated three times from scratch. In larger examples, many more values of fib, or subproblems, are recalculated, leading to an exponential time algorithm. Now, suppose we have a simple map object, m, which maps each value of fib that has already been calculated to its result, and we modify our function to use it and update it. The resulting function requires only O(n) time instead of exponential time (but requires O(n) space): var m := map(0 → 0, 1 → 1) function fib(n) if key n is not in map m m[n] := fib(n − 1) + fib(n − 2) return m[n] This technique of saving values that have already been calculated is called memoization; this is the top-down approach, since we first break the problem into subproblems and then calculate and store In the bottom-up approach, we calculate the smaller values of fib first, then build larger values from them. This method also uses O(n) time since it contains a loop that repeats n − 1 times, but it only takes constant (O(1)) space, in contrast to the top-down approach which requires O(n) space to store the map. function fib(n) if n = 0 return 0 var previousFib := 0, currentFib := 1 repeat n − 1 times // loop is skipped if n = 1 var newFib := previousFib + currentFib previousFib := currentFib currentFib := newFib return currentFib In both examples, we only calculate fib(2) one time, and then use it to calculate both fib(4) and fib(3), instead of computing it every time either of them is evaluated. Note that the above method actually takes ${\displaystyle \Omega (n^{2})}$ time for large n because addition of two integers with ${\displaystyle \Omega (n)}$ bits each takes ${\displaystyle \Omega (n)}$ time. (The n^th fibonacci number has ${\displaystyle \Omega (n)}$ bits.) Also, there is a closed form for the Fibonacci sequence, known as Binet's formula, from which the ${\displaystyle n}$-th term can be computed in approximately ${\displaystyle O(n(\log n)^{2})}$ time, which is more efficient than the above dynamic programming technique. However, the simple recurrence directly gives the matrix form that leads to an approximately ${\displaystyle O(n\log n)}$ algorithm by fast matrix exponentiation. A type of balanced 0–1 matrix[edit] Consider the problem of assigning values, either zero or one, to the positions of an n × n matrix, with n even, so that each row and each column contains exactly n / 2 zeros and n / 2 ones. We ask how many different assignments there are for a given ${\displaystyle n}$. For example, when n = 4, four possible solutions are ${\displaystyle {\begin{bmatrix}0&1&0&1\\1&0&1&0\\0&1&0&1\\1&0&1&0\end{bmatrix}}{\text{ and }}{\begin{bmatrix}0&0&1&1\\0&0&1&1\\1&1&0&0\\1&1&0&0\end{bmatrix}}{\text{ and }}{\begin{bmatrix}1&1&0&0 \\0&0&1&1\\1&1&0&0\\0&0&1&1\end{bmatrix}}{\text{ and }}{\begin{bmatrix}1&0&0&1\\0&1&1&0\\0&1&1&0\\1&0&0&1\end{bmatrix}}.}$ There are at least three possible approaches: brute force, backtracking, and dynamic programming. Brute force consists of checking all assignments of zeros and ones and counting those that have balanced rows and columns (n / 2 zeros and n / 2 ones). As there are ${\displaystyle {\tbinom {n}{n/2}} ^{n}}$ possible assignments, this strategy is not practical except maybe up to ${\displaystyle n=6}$. Backtracking for this problem consists of choosing some order of the matrix elements and recursively placing ones or zeros, while checking that in every row and column the number of elements that have not been assigned plus the number of ones or zeros are both at least n / 2. While more sophisticated than brute force, this approach will visit every solution once, making it impractical for n larger than six, since the number of solutions is already 116,963,796,250 for n = 8, as we shall see. Dynamic programming makes it possible to count the number of solutions without visiting them all. Imagine backtracking values for the first row – what information would we require about the remaining rows, in order to be able to accurately count the solutions obtained for each first row value? We consider k × n boards, where 1 ≤ k ≤ n, whose ${\displaystyle k}$ rows contain ${\displaystyle n/2}$ zeros and ${\displaystyle n/2}$ ones. The function f to which memoization is applied maps vectors of n pairs of integers to the number of admissible boards (solutions). There is one pair for each column, and its two components indicate respectively the number of zeros and ones that have yet to be placed in that column. We seek the value of ${\displaystyle f((n/2,n/2),(n/2,n/2),\ldots (n/2,n/ 2))}$ (${\displaystyle n}$ arguments or one vector of ${\displaystyle n}$ elements). The process of subproblem creation involves iterating over every one of ${\displaystyle {\tbinom {n}{n/2}}}$ possible assignments for the top row of the board, and going through every column, subtracting one from the appropriate element of the pair for that column, depending on whether the assignment for the top row contained a zero or a one at that position. If any one of the results is negative, then the assignment is invalid and does not contribute to the set of solutions (recursion stops). Otherwise, we have an assignment for the top row of the k × n board and recursively compute the number of solutions to the remaining (k − 1) × n board, adding the numbers of solutions for every admissible assignment of the top row and returning the sum, which is being memoized. The base case is the trivial subproblem, which occurs for a 1 × n board. The number of solutions for this board is either zero or one, depending on whether the vector is a permutation of n / 2 ${\displaystyle (0,1)}$ and n / 2 ${\displaystyle (1,0)}$ pairs or not. For example, in the first two boards shown above the sequences of vectors would be ((2, 2) (2, 2) (2, 2) (2, 2)) ((2, 2) (2, 2) (2, 2) (2, 2)) k = 4 ((1, 2) (2, 1) (1, 2) (2, 1)) ((1, 2) (1, 2) (2, 1) (2, 1)) k = 3 ((1, 1) (1, 1) (1, 1) (1, 1)) ((0, 2) (0, 2) (2, 0) (2, 0)) k = 2 ((0, 1) (1, 0) (0, 1) (1, 0)) ((0, 1) (0, 1) (1, 0) (1, 0)) k = 1 ((0, 0) (0, 0) (0, 0) (0, 0)) ((0, 0) (0, 0), (0, 0) (0, 0)) The number of solutions (sequence A058527 in the OEIS) is ${\displaystyle 1,\,2,\,90,\,297200,\,116963796250,\,6736218287430460752,\ldots }$ Links to the MAPLE implementation of the dynamic programming approach may be found among the external links. Consider a checkerboard with n × n squares and a cost-function c(i, j) which returns a cost associated with square (i,j) (i being the row, j being the column). For instance (on a 5 × 5 checkerboard), 2 – 6 7 0 – 1 – – *5* – – Thus c(1, 3) = 5 Let us say there was a checker that could start at any square on the first rank (i.e., row) and you wanted to know the shortest path (the sum of the minimum costs at each visited rank) to get to the last rank; assuming the checker could move only diagonally left forward, diagonally right forward, or straight forward. That is, a checker on (1,3) can move to (2,2), (2,3) or (2,4). 2 x x x 1 o This problem exhibits optimal substructure. That is, the solution to the entire problem relies on solutions to subproblems. Let us define a function q(i, j) as q(i, j) = the minimum cost to reach square (i, j). Starting at rank n and descending to rank 1, we compute the value of this function for all the squares at each successive rank. Picking the square that holds the minimum value at each rank gives us the shortest path between rank n and rank 1. Note that q(i, j) is equal to the minimum cost to get to any of the three squares below it (since those are the only squares that can reach it) plus c(i, j). For instance: 4 A 3 B C D ${\displaystyle q(A)=\min(q(B),q(C),q(D))+c(A)\,}$ Now, let us define q(i, j) in somewhat more general terms: ${\displaystyle q(i,j)={\begin{cases}\infty &j<1{\text{ or }}j>n\\c(i,j)&i=1\\\min(q(i-1,j-1),q(i-1,j),q(i-1,j+1))+c(i,j)&{\text{otherwise.}}\end{cases}}}$ The first line of this equation is there to make the recursive property simpler (when dealing with the edges, so we need only one recursion). The second line says what happens in the last rank, to provide a base case. The third line, the recursion, is the important part. It is similar to the A,B,C,D example. From this definition we can derive straightforward recursive code for q(i, j). In the following pseudocode, n is the size of the board, c(i, j) is the cost-function, and min() returns the minimum of a number of values: function minCost(i, j) if j < 1 or j > n return infinity else if i = 1 return c(i, j) return min( minCost(i-1, j-1), minCost(i-1, j), minCost(i-1, j+1) ) + c(i, j) It should be noted that this function only computes the path-cost, not the actual path. We discuss the actual path below. This, like the Fibonacci-numbers example, is horribly slow since it wastes time recomputing the same shortest paths over and over. However, we can compute it much faster in a bottom-up fashion if we store path-costs in a two-dimensional array q[i, j] rather than using a function. This avoids recomputation; all the values needed for array q[i, j] are computed ahead of time only once. Precomputed values for (i,j) are simply looked-up whenever needed. We also need to know what the actual shortest path is. To do this, we use another array p[i, j], a predecessor array. This array indirectly records the path to any square s. The indirection to s is modeled as offsets relative to the index of the previously-computed shortest path. To reconstruct the path, we lookup the predecessor of s, then the predecessor of that square, then the predecessor of that square, and so on recursively, until we reach the starting square. Consider the following code: function computeShortestPathArrays() for x from 1 to n q[1, x] := c(1, x) for y from 1 to n q[y, 0] := infinity q[y, n + 1] := infinity for y from 2 to n for x from 1 to n m := min(q[y-1, x-1], q[y-1, x], q[y-1, x+1]) q[y, x] := m + c(y, x) if m = q[y-1, x-1] p[y, x] := -1 else if m = q[y-1, x] p[y, x] := 0 p[y, x] := 1 Now the rest is a simple matter of finding the minimum and printing it. function computeShortestPath() minIndex := 1 min := q[n, 1] for i from 2 to n if q[n, i] < min minIndex := i min := q[n, i] printPath(n, minIndex) function printPath(y, x) if y = 2 print(x + p[y, x]) printPath(y-1, x + p[y, x]) Sequence alignment[edit] In genetics, sequence alignment is an important application where dynamic programming is essential.^[11] Typically, the problem consists of transforming one sequence into another using edit operations that replace, insert, or remove an element. Each operation has an associated cost, and the goal is to find the sequence of edits with the lowest total cost. The problem can be stated naturally as a recursion, a sequence A is optimally edited into a sequence B by either: 1. inserting the first character of B, and performing an optimal alignment of A and the tail of B 2. deleting the first character of A, and performing the optimal alignment of the tail of A and B 3. replacing the first character of A with the first character of B, and performing optimal alignments of the tails of A and B. The partial alignments can be tabulated in a matrix, where cell (i,j) contains the cost of the optimal alignment of A[1..i] to B[1..j]. The cost in cell (i,j) can be calculated by adding the cost of the relevant operations to the cost of its neighboring cells, and selecting the optimum. Different variants exist, see Smith–Waterman algorithm and Needleman–Wunsch algorithm. Tower of Hanoi puzzle[edit] The Tower of Hanoi or Towers of Hanoi is a mathematical game or puzzle. It consists of three rods, and a number of disks of different sizes which can slide onto any rod. The puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest at the top, thus making a conical shape. The objective of the puzzle is to move the entire stack to another rod, obeying the following rules: • Only one disk may be moved at a time. • Each move consists of taking the upper disk from one of the rods and sliding it onto another rod, on top of the other disks that may already be present on that rod. • No disk may be placed on top of a smaller disk. The dynamic programming solution consists of solving the functional equation S(n,h,t) = S(n-1,h, not(h,t)) ; S(1,h,t) ; S(n-1,not(h,t),t) where n denotes the number of disks to be moved, h denotes the home rod, t denotes the target rod, not(h,t) denotes the third rod (neither h nor t), ";" denotes concatenation, and S(n, h, t) := solution to a problem consisting of n disks that are to be moved from rod h to rod t. Note that for n=1 the problem is trivial, namely S(1,h,t) = "move a disk from rod h to rod t" (there is only one disk left). The number of moves required by this solution is 2^n − 1. If the objective is to maximize the number of moves (without cycling) then the dynamic programming functional equation is slightly more complicated and 3^n − 1 moves are required.^[12] Egg dropping puzzle[edit] The following is a description of the instance of this famous puzzle involving N=2 eggs and a building with H=36 floors:^[13] Suppose that we wish to know which stories in a 36-story building are safe to drop eggs from, and which will cause the eggs to break on landing (using U.S. English terminology, in which the first floor is at ground level). We make a few assumptions: □ An egg that survives a fall can be used again. □ A broken egg must be discarded. □ The effect of a fall is the same for all eggs. □ If an egg breaks when dropped, then it would break if dropped from a higher window. □ If an egg survives a fall, then it would survive a shorter fall. □ It is not ruled out that the first-floor windows break eggs, nor is it ruled out that eggs can survive the 36th-floor windows. If only one egg is available and we wish to be sure of obtaining the right result, the experiment can be carried out in only one way. Drop the egg from the first-floor window; if it survives, drop it from the second-floor window. Continue upward until it breaks. In the worst case, this method may require 36 droppings. Suppose 2 eggs are available. What is the lowest number of egg-droppings that is guaranteed to work in all cases? To derive a dynamic programming functional equation for this puzzle, let the state of the dynamic programming model be a pair s = (n,k), where n = number of test eggs available, n = 0, 1, 2, 3, ..., N − 1. k = number of (consecutive) floors yet to be tested, k = 0, 1, 2, ..., H − 1. For instance, s = (2,6) indicates that two test eggs are available and 6 (consecutive) floors are yet to be tested. The initial state of the process is s = (N,H) where N denotes the number of test eggs available at the commencement of the experiment. The process terminates either when there are no more test eggs (n = 0) or when k = 0, whichever occurs first. If termination occurs at state s = (0,k) and k > 0, then the test failed. Now, let W(n,k) = minimum number of trials required to identify the value of the critical floor under the worst-case scenario given that the process is in state s = (n,k). Then it can be shown that^[14] W(n,k) = 1 + min{max(W(n − 1, x − 1), W(n,k − x)): x = 1, 2, ..., k } with W(n,0) = 0 for all n > 0 and W(1,k) = k for all k. It is easy to solve this equation iteratively by systematically increasing the values of n and k. An interactive online facility is available for experimentation with this model as well as with other versions of this puzzle (e.g. when the objective is to minimize the expected value of the number of trials.)^[14] Faster DP solution using a different parametrization[edit] Notice that the above solution takes ${\displaystyle O(nk^{2})}$ time with a DP solution. This can be improved to ${\displaystyle O(nk\log k)}$ time by binary searching on the optimal ${\displaystyle x}$ in the above recurrence, since ${\displaystyle W(n-1,x-1)}$ is increasing in ${\displaystyle x}$ while ${\displaystyle W(n,k-x)}$ is decreasing in ${\displaystyle x}$, thus a local minimum of ${\ displaystyle \max(W(n-1,x-1),W(n,k-x))}$ is a global minimum. Also, by storing the optimal ${\displaystyle x}$ for each cell in the DP table and referring to its value for the previous cell, the optimal ${\displaystyle x}$ for each cell can be found in constant time, improving it to ${\displaystyle O(nk)}$ time. However, there is an even faster solution that involves a different parametrization of the problem: Let ${\displaystyle k}$ be the total number of floors such that the eggs break when dropped from the ${\displaystyle k}$th floor (The example above is equivalent to taking ${\displaystyle k=37}$). Let ${\displaystyle m}$ be the minimum floor from which the egg must be dropped to be broken. Let ${\displaystyle f(t,n)}$ be the maximum number of values of ${\displaystyle m}$ that are distinguishable using ${\displaystyle t}$ tries and ${\displaystyle n}$ eggs. Then ${\displaystyle f(t,0)=f(0,n)=1}$ for all ${\displaystyle t,n\geq 0}$. Let ${\displaystyle a}$ be the floor from which the first egg is dropped in the optimal strategy. If the first egg broke, ${\displaystyle m}$ is from ${\displaystyle 1}$ to ${\displaystyle a}$ and distinguishable using at most ${\displaystyle t-1}$ tries and ${\displaystyle n-1}$ eggs. If the first egg did not break, ${\displaystyle m}$ is from ${\displaystyle a+1}$ to ${\displaystyle k}$ and distinguishable using ${\displaystyle t-1}$ tries and ${\displaystyle n}$ eggs. Therefore, ${\displaystyle f(t,n)=f(t-1,n-1)+f(t-1,n)}$. Then the problem is equivalent to finding the minimum ${\displaystyle x}$ such that ${\displaystyle f(x,n)\geq k}$. To do so, we could compute ${\displaystyle \{f(t,i):0\leq i\leq n\}}$ in order of increasing ${\displaystyle t}$, which would take ${\displaystyle O(nx)}$ time. Thus, if we separately handle the case of ${\displaystyle n=1}$, the algorithm would take ${\displaystyle O(n{\sqrt {k}})}$ time. But the recurrence relation can in fact be solved, giving ${\displaystyle f(t,n)=\sum _{i=0}^{n}{\binom {t}{i}}}$, which can be computed in ${\displaystyle O(n)}$ time using the identity ${\ displaystyle {\binom {t}{i+1}}={\binom {t}{i}}{\frac {t-i}{i+1}}}$ for all ${\displaystyle i\geq 0}$. Since ${\displaystyle f(t,n)\leq f(t+1,n)}$ for all ${\displaystyle t\geq 0}$, we can binary search on ${\displaystyle t}$ to find ${\displaystyle x}$, giving an ${\displaystyle O(n\log k)}$ algorithm. ^[15] Matrix chain multiplication[edit] Matrix chain multiplication is a well-known example that demonstrates utility of dynamic programming. For example, engineering applications often have to multiply a chain of matrices. It is not surprising to find matrices of large dimensions, for example 100×100. Therefore, our task is to multiply matrices ${\displaystyle A_{1},A_{2},....A_{n}}$. As we know from basic linear algebra, matrix multiplication is not commutative, but is associative; and we can multiply only two matrices at a time. So, we can multiply this chain of matrices in many different ways, for example: ((A[1] × A[2]) × A[3]) × ... A[n] A[1]×(((A[2]×A[3])× ... ) × A[n]) (A[1] × A[2]) × (A[3] × ... A[n]) and so on. There are numerous ways to multiply this chain of matrices. They will all produce the same final result, however they will take more or less time to compute, based on which particular matrices are multiplied. If matrix A has dimensions m×n and matrix B has dimensions n×q, then matrix C=A×B will have dimensions m×q, and will require m*n*q scalar multiplications (using a simplistic matrix multiplication algorithm for purposes of illustration). For example, let us multiply matrices A, B and C. Let us assume that their dimensions are m×n, n×p, and p×s, respectively. Matrix A×B×C will be of size m×s and can be calculated in two ways shown 1. Ax(B×C) This order of matrix multiplication will require nps + mns scalar multiplications. 2. (A×B)×C This order of matrix multiplication will require mnp + mps scalar calculations. Let us assume that m = 10, n = 100, p = 10 and s = 1000. So, the first way to multiply the chain will require 1,000,000 + 1,000,000 calculations. The second way will require only 10,000+100,000 calculations. Obviously, the second way is faster, and we should multiply the matrices using that arrangement of parenthesis. Therefore, our conclusion is that the order of parenthesis matters, and that our task is to find the optimal order of parenthesis. At this point, we have several choices, one of which is to design a dynamic programming algorithm that will split the problem into overlapping problems and calculate the optimal arrangement of parenthesis. The dynamic programming solution is presented below. Let's call m[i,j] the minimum number of scalar multiplications needed to multiply a chain of matrices from matrix i to matrix j (i.e. A[i] × .... × A[j], i.e. i<=j). We split the chain at some matrix k, such that i <= k < j, and try to find out which combination produces minimum m[i,j]. The formula is: if i = j, m[i,j]= 0 if i < j, m[i,j]= min over all possible values of k (m[i,k]+m[k+1,j] + ${\displaystyle p_{i-1}*p_{k}*p_{j}}$) where k ranges from i to j − 1. • ${\displaystyle p_{i-1}}$ is the row dimension of matrix i, • ${\displaystyle p_{k}}$ is the column dimension of matrix k, • ${\displaystyle p_{j}}$ is the column dimension of matrix j. This formula can be coded as shown below, where input parameter "chain" is the chain of matrices, i.e. ${\displaystyle A_{1},A_{2},...A_{n}}$: function OptimalMatrixChainParenthesis(chain) n = length(chain) for i = 1, n m[i,i] = 0 //since it takes no calculations to multiply one matrix for len = 2, n for i = 1, n - len + 1 for j = i + 1, len -1 m[i,j] = infinity //so that the first calculation updates for k = i, j-1 q = m[i, k] + m[k+1, j] + ${\displaystyle p_{i-1}*p_{k}*p_{j}}$ if q < m[i, j] // the new order of parenthesis is better than what we had m[i, j] = q //update s[i, j] = k //record which k to split on, i.e. where to place the parenthesis So far, we have calculated values for all possible m[i, j], the minimum number of calculations to multiply a chain from matrix i to matrix j, and we have recorded the corresponding "split point"s[i, j]. For example, if we are multiplying chain A[1]×A[2]×A[3]×A[4], and it turns out that m[1, 3] = 100 and s[1, 3] = 2, that means that the optimal placement of parenthesis for matrices 1 to 3 is ${\ displaystyle (A_{1}\times A_{2})\times A_{3}}$ and to multiply those matrices will require 100 scalar calculation. This algorithm will produce "tables" m[, ] and s[, ] that will have entries for all possible values of i and j. The final solution for the entire chain is m[1, n], with corresponding split at s[1, n]. Unraveling the solution will be recursive, starting from the top and continuing until we reach the base case, i.e. multiplication of single matrices. Therefore, the next step is to actually split the chain, i.e. to place the parenthesis where they (optimally) belong. For this purpose we could use the following algorithm: function PrintOptimalParenthesis(s, i, j) if i = j print "A"i print "(" PrintOptimalParenthesis(s, i, s[i, j]) PrintOptimalParenthesis(s, s[i, j] + 1, j) ")" Of course, this algorithm is not useful for actual multiplication. This algorithm is just a user-friendly way to see what the result looks like. To actually multiply the matrices using the proper splits, we need the following algorithm: function MatrixChainMultiply(chain from 1 to n) // returns the final matrix, i.e. A1×A2×... ×An OptimalMatrixChainParenthesis(chain from 1 to n) // this will produce s[ . ] and m[ . ] "tables" OptimalMatrixMultiplication(s, chain from 1 to n) // actually multiply function OptimalMatrixMultiplication(s, i, j) // returns the result of multiplying a chain of matrices from Ai to Aj in optimal way if i < j // keep on splitting the chain and multiplying the matrices in left and right sides LeftSide = OptimalMatrixMultiplication(s, i, s[i, j]) RightSide = OptimalMatrixMultiplication(s, s[i, j] + 1, j) return MatrixMultiply(LeftSide, RightSide) else if i = j return Ai // matrix at position i print "error, i <= j must hold" function MatrixMultiply(A, B) // function that multiplies two matrices if columns(A) = rows(B) for i = 1, rows(A) for j = 1, columns(B) C[i, j] = 0 for k = 1, columns(A) C[i, j] = C[i, j] + A[i, k]*B[k, j] return C print "error, incompatible dimensions." The term dynamic programming was originally used in the 1940s by Richard Bellman to describe the process of solving problems where one needs to find the best decisions one after another. By 1953, he refined this to the modern meaning, referring specifically to nesting smaller decision problems inside larger decisions,^[16] and the field was thereafter recognized by the IEEE as a systems analysis and engineering topic. Bellman's contribution is remembered in the name of the Bellman equation, a central result of dynamic programming which restates an optimization problem in recursive form. Bellman explains the reasoning behind the term dynamic programming in his autobiography, Eye of the Hurricane: An Autobiography (1984, page 159). He explains: "I spent the Fall quarter (of 1950) at RAND. My first task was to find a name for multistage decision processes. An interesting question is, Where did the name, dynamic programming, come from? The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word research. I’m not using the term lightly; I’m using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term research in his presence. You can imagine how he felt, then, about the term mathematical. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning, is not a good word for various reasons. I decided therefore to use the word “programming”. I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying. I thought, let's kill two birds with one stone. Let's take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it's impossible to use the word dynamic in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It's impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities." The word dynamic was chosen by Bellman to capture the time-varying aspect of the problems, and because it sounded impressive.^[11] The word programming referred to the use of the method to find an optimal program, in the sense of a military schedule for training or logistics. This usage is the same as that in the phrases linear programming and mathematical programming, a synonym for mathematical optimization.^[17] The above explanation of the origin of the term is lacking. As Russell and Norvig in their book have written, referring to the above story: "This cannot be strictly true, because his first paper using the term (Bellman, 1952) appeared before Wilson became Secretary of Defense in 1953.”^[18] Also, there is a comment in a speech by Harold J. Kushner, where he remembers Bellman. Quoting Kushner as he speaks of Bellman: "On the other hand, when I asked him the same question, he replied that he was trying to upstage Dantzig's linear programming by adding dynamic. Perhaps both motivations were Algorithms that use dynamic programming[edit] See also[edit] Further reading[edit] External links[edit] This article's use of external links may not follow Wikipedia's policies or guidelines (March 2016) (Learn how and when to remove this template message)
{"url":"https://static.hlt.bme.hu/semantics/external/pages/elemz%C3%A9si_fa/en.wikipedia.org/wiki/Dynamic_programming.html","timestamp":"2024-11-03T18:37:07Z","content_type":"text/html","content_length":"386631","record_id":"<urn:uuid:dec48388-a1a8-42f0-a40a-8ffb8ac8c424>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00241.warc.gz"}
(x+12)(x-3) In Standard Form Expanding (x+12)(x-3) into Standard Form In mathematics, standard form for a quadratic expression is ax² + bx + c, where a, b, and c are constants. To express the product of (x+12)(x-3) in standard form, we need to expand the expression using the distributive property, also known as FOIL (First, Outer, Inner, Last). Steps to Expand the Expression 1. Multiply the First terms: 2. Multiply the Outer terms: 3. Multiply the Inner terms: 4. Multiply the Last terms: 5. Combine the like terms: □ x² - 3x + 12x - 36 = x² + 9x - 36 Standard Form Therefore, the expression (x+12)(x-3) in standard form is x² + 9x - 36.
{"url":"https://jasonbradley.me/page/(x%252B12)(x-3)-in-standard-form","timestamp":"2024-11-03T02:59:21Z","content_type":"text/html","content_length":"59799","record_id":"<urn:uuid:d252106d-a853-49df-b9a2-153dd795cdc6>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00251.warc.gz"}
'Equity versus fixed income: the predictive power of bank surveys' | Macrosynergy Equity versus fixed income: the predictive power of bank surveys # This notebook serves as an illustration of the points discussed in the post “Equity versus fixed income: the predictive power of bank surveys” available on the Macrosynergy website. Bank lending surveys help predict the relative performance of equity and duration positions. Signals of strengthening credit demand and easing lending conditions favor a stronger economy and expanding leverage, supporting equity returns. Signs of deteriorating credit demand and tightening credit supply bode for a weaker economy and more accommodative monetary policy, supporting duration returns. Empirical evidence for developed markets strongly supports these propositions. Since 2000, bank survey scores have been a significant predictor of equity versus duration returns. They helped create uncorrelated returns in both asset classes, as well as for a relative asset class book. This notebook provides the essential code required to replicate the analysis discussed in the post. The notebook covers the three main parts: • Get Packages and JPMaQS Data: This section is responsible for installing and importing the necessary Python packages that are used throughout the analysis. • Transformations and Checks: In this part, the notebook performs various calculations and transformations on the data to derive the relevant signals and targets used for the analysis, including constructing weighted average credit demand, average developed markets equity and duration returns, and relative equity vs. duration returns. • Value Checks: This is the most critical section, where the notebook calculates and implements the trading strategies based on the hypotheses tested in the post. Depending on the analysis, this section involves backtesting various trading strategies targeting equity, fixed income and relative returns. The strategies utilize the inflation indicators and other signals derived in the previous section. It’s important to note that while the notebook covers a selection of indicators and strategies used for the post’s main findings, there are countless other possible indicators and approaches that can be explored by users. Users can modify the code to test different hypotheses and strategies based on own research and ideas. Best of luck with your research! Get packages and JPMaQS data # This notebook primarily relies on the standard packages available in the Python data science stack. However, there is an additional package macrosynergy that is required for two purposes: • Downloading JPMaQS data: The macrosynergy package facilitates the retrieval of JPMaQS data, which is used in the notebook. • For the analysis of quantamental data and value propositions: The macrosynergy package provides functionality for performing quick analyses of quantamental data and exploring value propositions. For detailed information and a comprehensive understanding of the macrosynergy package and its functionalities, please refer to the “Introduction to Macrosynergy package” notebook on the Macrosynergy Quantamental Academy or visit the following link on Kaggle . # Uncomment below if the latest macrosynergy package is not installed ! pip install macrosynergy --upgrade""" '\n%%capture\n! pip install macrosynergy --upgrade' import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import warnings import os import macrosynergy.management as msm import macrosynergy.panel as msp import macrosynergy.signal as mss import macrosynergy.pnl as msn from macrosynergy.download import JPMaQSDownload The JPMaQS indicators we consider are downloaded using the J.P. Morgan Dataquery API interface within the macrosynergy package. This is done by specifying ticker strings, formed by appending an indicator category code to a currency area code <cross_section>. These constitute the main part of a full quantamental indicator ticker, taking the form DB(JPMAQS,<cross_section>_<category>,<info>) , where denotes the time series of information for the given cross-section and category. The following types of information are available: value giving the latest available values for the indicator eop_lag referring to days elapsed since the end of the observation period mop_lag referring to the number of days elapsed since the mean observation period grade denoting a grade of the observation, giving a metric of real-time information quality. After instantiating the JPMaQSDownload class within the macrosynergy.download module, one can use the download(tickers,start_date,metrics) method to easily download the necessary data, where tickers is an array of ticker strings, start_date is the first collection date to be considered and metrics is an array comprising the times series information to be downloaded. For more information see here # Cross-sections of interest cids_dm = ["EUR", "GBP", "JPY", "CAD", "USD"] cids = cids_dm # Quantamental categories of interest main = ["BLSDSCORE_NSA", # Demand "BLSCSCORE_NSA"] # Suppl econ = ["USDGDPWGT_SA_3YMA"] # economic context mark = [ ] # market context xcats = main + econ + mark # Extra tickers xtix = ["USD_GB10YXR_NSA"] # Resultant tickers tickers = [cid + "_" + xcat for cid in cids for xcat in xcats] + xtix print(f"Maximum number of tickers is {len(tickers)}") Maximum number of tickers is 26 JPMaQS indicators are conveniently grouped into 6 main categories: Economic Trends, Macroeconomic balance sheets, Financial conditions, Shocks and risk measures, Stylized trading factors, and Generic returns. Each indicator has a separate page with notes, description, availability, statistical measures, and timelines for main currencies. The description of each JPMaQS category is available under Macro quantamental academy . For tickers used in this notebook see Bank survey scores , Global production shares , Duration returns , and Equity index future returns . start_date = "2000-01-01" # end_date = "2023-05-01" # Retrieve credentials client_id: str = os.getenv("DQ_CLIENT_ID") client_secret: str = os.getenv("DQ_CLIENT_SECRET") with JPMaQSDownload(client_id=client_id, client_secret=client_secret) as dq: df = dq.download( # end_date=end_date, Downloading data from JPMaQS. Timestamp UTC: 2024-03-27 12:00:32 Connection successful! Requesting data: 100%|██████████| 6/6 [00:01<00:00, 4.95it/s] Downloading data: 100%|██████████| 6/6 [00:07<00:00, 1.18s/it] Time taken to download data: 9.64 seconds. Some expressions are missing from the downloaded data. Check logger output for complete list. 4 out of 104 expressions are missing. To download the catalogue of all available expressions and filter the unavailable expressions, set `get_catalogue=True` in the call to `JPMaQSDownload.download()`. Some dates are missing from the downloaded data. 3 out of 6325 dates are missing. dfx = df.copy().sort_values(["cid", "xcat", "real_date"]) <class 'pandas.core.frame.DataFrame'> Index: 149686 entries, 8602 to 149685 Data columns (total 7 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 real_date 149686 non-null datetime64[ns] 1 cid 149686 non-null object 2 xcat 149686 non-null object 3 eop_lag 149686 non-null float64 4 grading 149686 non-null float64 5 mop_lag 149686 non-null float64 6 value 149686 non-null float64 dtypes: datetime64[ns](1), float64(4), object(2) memory usage: 9.1+ MB Availability # It is important to assess data availability before conducting any analysis. It allows identifying any potential gaps or limitations in the dataset, which can impact the validity and reliability of analysis and ensure that a sufficient number of observations for each selected category and cross-section is available as well as determining the appropriate time periods for analysis. msm.check_availability(df, xcats=main, cids=cids, missing_recent=True) msm.check_availability(df, xcats=econ + mark, cids=cids, missing_recent=True) Transformations and checks # In this section, we conduct straightforward calculations and transformations on the data to derive the essential signals and targets necessary for the analysis. Features # The presented chart illustrates the weighted average of credit demand and supply conditions in developed markets, derived from bank lending surveys. Specifically, it incorporates the credit demand z-score ( BLSDSCORE_NSA ) and credit supply z-score ( BLSCSCORE_NSA ) for developed countries, namely EUR, GBP, JPY, CAD, and USD. Weights are assigned based on the respective proportions of global GDP and industrial production, calculated as a three-year moving average. Subsequently, both the aggregated credit demand z-score and credit supply z-score are grouped under the identifier GDM. To visualize these combined indicators, we utilize the view_timelines() function from the macrosynergy package. You can find more information about this function here cidx = cids_dm xcatx = ["BLSDSCORE_NSA", "BLSCSCORE_NSA"] dfa = pd.DataFrame(columns=list(dfx.columns)) for xc in xcatx: dfaa = msp.linear_composite( dfa = msm.update_df(dfa, dfaa) dfx = msm.update_df(dfx, dfa) cidx = ["GDM"] sdate = "2000-01-01" title="Quantamental bank lending scores, developed markets average, information states", "Survey score of loan demand", "Survey score of loan standards (supply conditions)", Targets # Equity and duration returns # In this segment, we amalgamate the volatility-targeted returns of both equity and fixed income from developed countries into a composite basket, where each country contributes equally. The initial list of countries, downloaded and compiled in the cids_dm list, encompasses EUR, GBP, JPY, CAD, and USD. The resulting new time series will be labeled as GDM (Global developed markets) and will be based on the respective averages of two crucial indicators: • Volatility-targeted equity returns, EQXR_VT10 These returns signify the near-future performance of major equity indices, including the Standard and Poor’s 500 Composite in USD, EURO STOXX 50 in EUR, Nikkei 225 Stock Average in JPY, FTSE 100 in GBP, and the Toronto Stock Exchange 60 Index in CAD. • Volatility-targeted duration returns, DU05YXR_VT10 . These returns reflect the performance of 5-year interest rate swap fixed receiver positions, assuming a monthly roll. Alternatively, one can consider using just 3 main currencies USD, EUR, and JPY. The new series will get the cross-sectional label G3 . To visualize these combined indicators, we utilize the view_timelines() function from the macrosynergy package. You can find more information about this function here xcatx = ["EQXR_VT10", "DU05YXR_VT10"] dict_bsks = { "GDM": cids_dm, "G3": ["EUR", "JPY", "USD"], dfa = pd.DataFrame(columns=list(dfx.columns)) for xc in xcatx: for key, value in dict_bsks.items(): dfaa = msp.linear_composite( dfa = msm.update_df(dfa, dfaa) dfx = msm.update_df(dfx, dfa) cidx = ["GDM"] sdate = "2000-01-01" title="Vol-targeted equity and duration basket returns, % cumulative, no compounding", "Equity index future returns, 10% vol target, DM5 basket", "5-year IRS receiver returns, 10% vol target, DM5 basket", Equity versus duration returns # In this notebook we look at the directional and at the relative returns for developed markets. We establish a fresh metric EQvDUXR_VT10 , defined as the difference between EQXR_VT10 and DU05YXR_VT10 . Subsequently, we consolidate individual country indicators into a unified metric, employing equal weighting for each cross-section. Similar to our previous approach, this newly combined metric, EQvDUXR_VT10 , is classified under the cross-sectional identifier GDM . cidx = cids_dm calcs = [ "EQvDUXR_VT10 = EQXR_VT10 - DU05YXR_VT10", dfa = msp.panel_calculator(df, calcs=calcs, cids=cidx) dfx = msm.update_df(dfx, dfa) xcatx = ["EQvDUXR_VT10"] dict_bsks = { "GDM": cids_dm, "G3": ["EUR", "JPY", "USD"], dfa = pd.DataFrame(columns=list(dfx.columns)) for xc in xcatx: for key, value in dict_bsks.items(): dfaa = msp.linear_composite( dfa = msm.update_df(dfa, dfaa) dfx = msm.update_df(dfx, dfa) Value checks # In this part of the analysis, the notebook calculates the naive PnLs (Profit and Loss) for directional equity, fixed income, and relative strategies using bank lending scores. The PnLs are calculated based on simple trading strategies that utilize the bank lending scores as signals (no regression analysis is involved). The strategies involve going long (buying) or short (selling) on respective asset positions based purely on the direction of the signals. To evaluate the performance of these strategies, the notebook computes various metrics and ratios, including: • Correlation: Measures the relationship between the inflation-based strategy returns and the actual returns. Positive correlations indicate that the strategy moves in the same direction as the market, while negative correlations indicate an opposite movement. • Accuracy Metrics: These metrics assess the accuracy of inflation-based strategies in predicting market movements. Common accuracy metrics include accuracy rate, balanced accuracy, precision, etc. • Performance Ratios: Various performance ratios, such as Sharpe ratio, Sortino ratio, Max draws etc. The notebook compares the performance of these simple inflation-based strategies with the long-only performance of the respective asset classes. It’s important to note that the analysis deliberately disregards transaction costs and risk management considerations. This is done to provide a more straightforward comparison of the strategies’ raw performance without the additional complexity introduced by transaction costs and risk management, which can vary based on trading size, institutional rules, and regulations. The analysis in the post and sample code in the notebook is a proof of concept only, using the simplest design. Duration returns # In this section, we test a simple idea that bank survey scores negatively predict fixed receiver returns in the interest rate swap market. Specs and panel test # bls = ["BLSDSCORE_NSA", targ = "DU05YXR_VT10" cidx = ["GDM"] dict_dubk = { "df": dfx, "sigs": sigs, "targ": targ, "cidx": cidx, "black": None, "srr": None, "pnls": None, We utilize the CategoryRelations() function from the macrosynergy package to visualize the connection between the bank survey credit demand score BLSDSCORE_NSA and the subsequent IRS (Interest Rate Swap) return. As anticipated, the visualization confirms a negative and statistically significant relationship at a 5% significance level. This finding aligns with the expected relationship between credit demand scores and IRS returns. You can access more details on this analysis by referring to the provided link dix = dict_dubk dfr = dix["df"] sig = dix["sigs"][0] targ = dix["targ"] cidx = dix["cidx"] crx = msp.CategoryRelations( xcats=[sig, targ], xcat_aggs=["last", "sum"], xcat_trims=[None, None], coef_box="lower left", xlab="Bank lending survey, credit demand z-score", ylab="5-year IRS return, vol-targeted at 10%, next month, %", title="Bank survey credit demand score and subsequent IRS returns of developed market basket", size=(10, 6), Conducting a parallel analysis by employing the alternate bank survey metric, the credit supply score labeled as BLSCSCORE_NSA , and subsequently examining the IRS (Interest Rate Swap) returns reveals a notably weaker and less statistically significant relationship. The underlying reasons for this diminished correlation are elaborated upon in the accompanying post. dix = dict_dubk dfr = dix["df"] sig = dix["sigs"][1] targ = dix["targ"] cidx = dix["cidx"] crx = msp.CategoryRelations( xcats=[sig, targ], xcat_aggs=["last", "sum"], xcat_trims=[None, None], coef_box="lower left", xlab="Bank lending survey, credit conditions z-score (higher = easier)", ylab="5-year IRS return, vol-targeted at 10%, next month, %", title="Bank survey credit conditions and subsequent IRS returns of developed market basket", size=(10, 6), Accuracy and correlation check # The SignalReturnRelations() class from the macrosynergy.signal module is specifically designed to analyze, visualize, and compare the relationships between panels of trading signals and panels of subsequent returns. bls = ["BLSDSCORE_NSA", targ = "DU05YXR_VT10" cidx = ["GDM"] srr = mss.SignalReturnRelations( sig_neg=[True, True], multiple_relations_table() is a method that compares multiple signal-return relations in one table. It is useful to compare the performance of different signals against the same return series (more than one possible financial return) and multiple possible frequencies. dix["srr"] = srr srrx = dix["srr"] │ │ │ │ │accuracy│bal_accuracy│pos_sigr│pos_retr│pos_prec│neg_prec│pearson │pearson_pval│kendall │kendall_pval│ auc │ │ Return │ Signal │Frequency│Aggregation│ │ │ │ │ │ │ │ │ │ │ │ │DU05YXR_VT10│BLSCSCORE_NSA_NEG│ M │ last │0.493103│0.490948 │0.520690│0.551724│0.543046│0.438849│0.070829│0.229177 │0.034178│0.385575 │0.490865│ │ ├─────────────────┼─────────┼───────────┼────────┼────────────┼────────┼────────┼────────┼────────┼────────┼────────────┼────────┼────────────┼────────┤ │ │BLSDSCORE_NSA_NEG│ M │ last │0.531034│0.529286 │0.517241│0.551724│0.580000│0.478571│0.116411│0.047638 │0.082055│0.037239 │0.529567│ Naive PnL # The NaivePnl() class is specifically designed to offer a quick and straightforward overview of a simplified Profit and Loss (PnL) profile associated with a set of trading signals. The term “naive” is used because the methods within this class do not factor in transaction costs or position limitations, which may include considerations related to risk management. This omission is intentional because the impact of costs and limitations varies widely depending on factors such as trading size, institutional rules, and regulatory requirements. As its primary objective, the class focuses on tracking the average IRS (Interest Rate Swap) return for developed markets, specifically the ‘DU05YXR_VT10,’ alongside the trading signals BLSDSCORE_NSA (credit demand z-score) and BLSCSCORE_NSA (credit supply z-score). It accommodates both binary PnL calculations, where signals are simplified into long (1) or short (-1) positions, and proportionate PnL calculations. For more in-depth information regarding the `NaivePnl() class and its functionalities, you can refer to the provided link here dix = dict_dubk sigx = dix["sigs"] naive_pnl = msn.NaivePnL( dict_pnls = { "PZN0": {"sig_add": 0, "sig_op": "zn_score_pan"}, "PZN1": {"sig_add": 1, "sig_op": "zn_score_pan"}, "BIN0": {"sig_add": 0, "sig_op": "binary"}, "BIN1": {"sig_add": 1, "sig_op": "binary"}, for key, value in dict_pnls.items(): for sig in sigx: pnl_name=sig + "_" + key, naive_pnl.make_long_pnl(vol_scale=10, label="Long only") dix["pnls"] = naive_pnl The plot_pnls() method of the NaivePnl() class is used to plot a line chart of cumulative PnL dix = dict_dubk sigx = dix["sigs"] naive_pnl = dix["pnls"] pnls = [sig + type for type in ["_PZN0", "_BIN0"] for sig in sigx] dict_labels = {"BLSDSCORE_NSA_PZN0": "based on credit demand score, proportionate", "BLSCSCORE_NSA_PZN0": "based on credit conditions score, proportionate", "BLSDSCORE_NSA_BIN0": "based on credit demand score, binary", "BLSCSCORE_NSA_BIN0": "based on credit conditions score, binary" title="Naive PnLs for IRS baskets, based on survey scores, no bias", figsize=(16, 8), The method evaluate_pnls() returns a small dataframe of key PnL statistics. dix = dict_dubk sigx = dix["sigs"] naive_pnl = dix["pnls"] pnls = [sig + type for sig in sigx for type in ["_PZN0", "_PZN1", "_BIN0", "_BIN1"]] + [ "Long only" df_eval = naive_pnl.evaluate_pnls( │ │Return (pct ar)│St. Dev. (pct ar)│Sharpe Ratio│Sortino Ratio│Max 21-day draw│Max 6-month draw│Traded Months│ │ xcat │ │ │ │ │ │ │ │ │BLSCSCORE_NSA_BIN0 │1.311453 │10.0 │0.131145 │0.185839 │-13.753375 │-20.428536 │291 │ │BLSCSCORE_NSA_BIN1 │3.182654 │10.0 │0.318265 │0.451375 │-19.00006 │-26.974585 │291 │ │BLSCSCORE_NSA_PZN0 │3.222786 │10.0 │0.322279 │0.464922 │-20.527223 │-24.987429 │291 │ │BLSCSCORE_NSA_PZN1 │4.116934 │10.0 │0.411693 │0.592551 │-18.803465 │-24.342558 │291 │ │BLSDSCORE_NSA_BIN0 │3.920496 │10.0 │0.39205 │0.56325 │-12.526584 │-20.433878 │291 │ │BLSDSCORE_NSA_BIN1 │5.021935 │10.0 │0.502193 │0.721779 │-18.183052 │-24.693086 │291 │ │BLSDSCORE_NSA_PZN0 │4.69431 │10.0 │0.469431 │0.691568 │-19.465818 │-26.96425 │291 │ │BLSDSCORE_NSA_PZN1 │5.617326 │10.0 │0.561733 │0.814085 │-15.901654 │-22.449896 │291 │ │ Long only │3.125464 │10.0 │0.312546 │0.441982 │-13.742726 │-28.386188 │291 │ Equity returns # Similar to the previous chapter, we proceed to explore the connections between bank lending survey scores and consequent equity returns, specifically those targeted at 10% volatility. We initiate this analysis with the bank survey demand score and its correlation with consequent monthly equity index future returns. Evidently, the relationship between these variables exhibits a positive and statistically significant relationship. Specs and panel test # bls = ["BLSDSCORE_NSA", sigs = bls targ = "EQXR_VT10" cidx = ["GDM"] dict_eqbk = { "df": dfx, "sigs": sigs, "targ": targ, "cidx": cidx, "black": None, "srr": None, "pnls": None, dix = dict_eqbk dfr = dix["df"] sig = dix["sigs"][0] targ = dix["targ"] cidx = dix["cidx"] crx = msp.CategoryRelations( xcats=[sig, targ], xcat_aggs=["last", "sum"], xcat_trims=[None, None], coef_box="lower left", xlab="Bank lending survey, credit demand z-score", ylab="Equity index future return, vol-targeted at 10%, next month, %", title="Bank survey credit demand score and subsequent equity returns of developed market basket", size=(10, 6), Predictive correlation has been even a bit stronger between bank lending conditions BLSCSCORE_NSA and subsequent monthly equity index returns. dix = dict_eqbk dfr = dix["df"] sig = dix["sigs"][1] targ = dix["targ"] cidx = dix["cidx"] crx = msp.CategoryRelations( xcats=[sig, targ], xcat_aggs=["last", "sum"], xcat_trims=[None, None], coef_box="lower left", xlab="Bank lending survey, credit conditions z-score (higher = easier)", ylab="Equity index future return, vol-targeted at 10%, next month, %", title="Bank survey credit supply score and subsequent equity returns of developed market basket", size=(10, 6), Accuracy and correlation check # The SignalReturnRelations() class from the macrosynergy.signal module is specifically designed to analyze, visualize, and compare the relationships between panels of trading signals and panels of subsequent returns. dix = dict_eqbk dfr = dix["df"] sigs = dix["sigs"] targ = dix["targ"] cidx = dix["cidx"] srr = mss.SignalReturnRelations( dix["srr"] = srr dix = dict_eqbk srrx = dix["srr"] multiple_relations_table() is a method that compares multiple signal-return relations in one table. It is useful to compare the performance of different signals against the same return series (more than one possible financial return) and multiple possible frequencies. │ │ │ │ │accuracy│bal_accuracy│pos_sigr│pos_retr│pos_prec│neg_prec│pearson │pearson_pval│kendall │kendall_pval│ auc │ │ Return │ Signal │Frequency│Aggregation│ │ │ │ │ │ │ │ │ │ │ │ │EQXR_VT10│BLSCSCORE_NSA│ M │ last │0.551724│0.556387 │0.479310│0.610345│0.669065│0.443709│0.163962│0.005125 │0.082151│0.037019 │0.559172│ │ ├─────────────┼─────────┼───────────┼────────┼────────────┼────────┼────────┼────────┼────────┼────────┼────────────┼────────┼────────────┼────────┤ │ │BLSDSCORE_NSA│ M │ last │0.555172│0.559048 │0.482759│0.610345│0.671429│0.446667│0.132989│0.023512 │0.075134│0.056466 │0.561997│ Naive PnL # As before, with fixed income returns, we create naive PnL using bank surveys as signals and equity returns as target. Please see here for details NaivePnl() class dix = dict_eqbk dfr = dix["df"] sigx = dix["sigs"] targ = dix["targ"] cidx = dix["cidx"] naive_pnl = msn.NaivePnL( dict_pnls = { "PZN0": {"sig_add": 0, "sig_op": "zn_score_pan"}, "PZN1": {"sig_add": 1, "sig_op": "zn_score_pan"}, "BIN0": {"sig_add": 0, "sig_op": "binary"}, "BIN1": {"sig_add": 1, "sig_op": "binary"}, for key, value in dict_pnls.items(): for sig in sigx: pnl_name=sig + "_" + key, naive_pnl.make_long_pnl(vol_scale=10, label="Long only") dix["pnls"] = naive_pnl The plot_pnls() method of the NaivePnl() class is used to plot a line chart of cumulative PnL dix = dict_eqbk sigx = dix["sigs"] naive_pnl = dix["pnls"] pnls = [sig + type for type in ["_PZN0", "_BIN0"] for sig in sigx] dict_labels = {"BLSDSCORE_NSA_PZN0": "based on credit demand score, proportionate", "BLSCSCORE_NSA_PZN0": "based on credit conditions score, proportionate", "BLSDSCORE_NSA_BIN0": "based on credit demand score, binary", "BLSCSCORE_NSA_BIN0": "based on credit conditions score, binary" title="Naive PnLs for equity index baskets, based on survey scores, no bias", figsize=(16, 8), The below PnLs approximately add up returns of long-only and survey-based positions in equal weights to produce long-biased portfolios. dix = dict_eqbk sigx = dix["sigs"] naive_pnl = dix["pnls"] pnls = [sig + type for type in ["_PZN0", "_BIN0"] for sig in sigx] dix = dict_eqbk sigx = dix["sigs"] naive_pnl = dix["pnls"] pnls = sorted([sig + type for sig in sigx for type in ["_PZN1", "_BIN1"]]) + ["Long only"] dict_labels = {"BLSDSCORE_NSA_PZN1": "based on credit demand score, proportionate", "BLSCSCORE_NSA_PZN1": "based on credit conditions score, proportionate", "BLSDSCORE_NSA_BIN1": "based on credit demand score, binary", "BLSCSCORE_NSA_BIN1": "based on credit conditions score, binary", "Long only": "Long only" title="Naive PnLs for equity index baskets, based on survey scores, long bias", figsize=(16, 8), The method evaluate_pnls() returns a small dataframe of key PnL statistics. dix = dict_eqbk sigx = dix["sigs"] naive_pnl = dix["pnls"] pnls = [sig + type for sig in sigx for type in ["_PZN0", "_PZN1", "_BIN0", "_BIN1"]] + [ "Long only" df_eval = naive_pnl.evaluate_pnls( │ │Return (pct ar)│St. Dev. (pct ar)│Sharpe Ratio│Sortino Ratio│Max 21-day draw│Max 6-month draw│Traded Months│ │ xcat │ │ │ │ │ │ │ │ │BLSCSCORE_NSA_BIN0 │2.814373 │10.0 │0.281437 │0.40505 │-12.065808 │-16.593084 │291 │ │BLSCSCORE_NSA_BIN1 │6.549787 │10.0 │0.654979 │0.985328 │-16.667449 │-18.988845 │291 │ │BLSCSCORE_NSA_PZN0 │2.163101 │10.0 │0.21631 │0.316375 │-22.624486 │-28.500536 │291 │ │BLSCSCORE_NSA_PZN1 │6.201297 │10.0 │0.62013 │0.870014 │-14.010911 │-15.23333 │291 │ │BLSDSCORE_NSA_BIN0 │3.603411 │10.0 │0.360341 │0.517412 │-12.066978 │-15.953213 │291 │ │BLSDSCORE_NSA_BIN1 │7.09783 │10.0 │0.709783 │1.065741 │-16.455291 │-18.747138 │291 │ │BLSDSCORE_NSA_PZN0 │2.405866 │10.0 │0.240587 │0.347736 │-20.839912 │-29.168401 │291 │ │BLSDSCORE_NSA_PZN1 │5.725593 │10.0 │0.572559 │0.79515 │-18.162164 │-18.961423 │291 │ │ Long only │5.114199 │10.0 │0.51142 │0.699751 │-23.703832 │-21.00946 │291 │ Equity versus duration returns # In the final part of Value checks we look at the relation between bank survey scores and volatility-targeted equity versus duration returns for the developed market basket. The target will be the earlier created difference between equity and duration return, 10% volatility targeted ( EQvDUXR_VT10 ) Specs and panel test # sigs = bls targ = "EQvDUXR_VT10" cidx = [ dict_edbk = { "df": dfx, "sigs": sigs, "targ": targ, "cidx": cidx, "black": None, "srr": None, "pnls": None, dix = dict_edbk dfr = dix["df"] sig = dix["sigs"][0] targ = dix["targ"] cidx = dix["cidx"] crx = msp.CategoryRelations( xcats=[sig, targ], xcat_aggs=["last", "sum"], xcat_trims=[None, None], coef_box="lower left", xlab="Bank lending survey, credit demand z-score", ylab="Equity versus IRS returns (both vol-targeted), next month, %", title="Bank survey credit demand and subsequent equity versus IRS returns of developed market basket", size=(10, 6), Conducting a parallel analysis by employing the alternate bank survey metric, the credit supply score labeled as BLSCSCORE_NSA , and subsequently examining the “EQvDUXR_VT10” return: dix = dict_edbk dfr = dix["df"] sig = dix["sigs"][1] targ = dix["targ"] cidx = dix["cidx"] crx = msp.CategoryRelations( xcats=[sig, targ], xcat_aggs=["last", "sum"], xcat_trims=[None, None], coef_box="lower left", xlab="Bank lending survey, credit conditions z-score (higher = easier)", ylab="Equity versus IRS returns (both vol-targeted), next month, %", title="Bank survey credit conditions and subsequent equity versus IRS returns of developed market basket", size=(10, 6), Accuracy and correlation check # The SignalReturnRelations() class from the macrosynergy.signal module is specifically designed to analyze, visualize, and compare the relationships between panels of trading signals and panels of subsequent returns. dix = dict_edbk dfr = dix["df"] sigs = dix["sigs"] targ = dix["targ"] cidx = dix["cidx"] srr = mss.SignalReturnRelations( dix["srr"] = srr dix = dict_edbk srrx = dix["srr"] │ │ │ │ │accuracy│bal_accuracy│pos_sigr│pos_retr│pos_prec│neg_prec│pearson│pearson_pval│kendall│kendall_pval│ auc │ │ Return │ Signal │Frequency│Aggregation│ │ │ │ │ │ │ │ │ │ │ │ │EQvDUXR_VT10│BLSCSCORE_NSA│ M │ last │0.555 │0.557 │0.479 │0.545 │0.604 │0.51 │0.141 │0.017 │0.077 │0.051 │0.557│ │ ├─────────────┼─────────┼───────────┼────────┼────────────┼────────┼────────┼────────┼────────┼───────┼────────────┼───────┼────────────┼─────┤ │ │BLSDSCORE_NSA│ M │ last │0.566 │0.567 │0.483 │0.545 │0.614 │0.52 │0.155 │0.008 │0.107 │0.007 │0.568│ Naive PnL # As before, we create naive PnL using bank surveys as signals and equity returns as target. Please see here for details NaivePnl() class dix = dict_edbk dfr = dix["df"] sigx = dix["sigs"] targ = dix["targ"] cidx = dix["cidx"] naive_pnl = msn.NaivePnL( dict_pnls = { "PZN0": {"sig_add": 0, "sig_op": "zn_score_pan"}, "PZN1": {"sig_add": 1, "sig_op": "zn_score_pan"}, "BIN0": {"sig_add": 0, "sig_op": "binary"}, "BIN1": {"sig_add": 1, "sig_op": "binary"}, for key, value in dict_pnls.items(): for sig in sigx: pnl_name=sig + "_" + key, naive_pnl.make_long_pnl(vol_scale=10, label="Long only") dix["pnls"] = naive_pnl The plot_pnls() method of the NaivePnl() class is used to plot a line chart of cumulative PnL dix = dict_edbk sigx = dix["sigs"] naive_pnl = dix["pnls"] pnls = [sig + type for type in ["_PZN0", "_BIN0"] for sig in sigx] dict_labels = {"BLSDSCORE_NSA_PZN0": "based on credit demand score, proportionate", "BLSCSCORE_NSA_PZN0": "based on credit conditions score, proportionate", "BLSDSCORE_NSA_BIN0": "based on credit demand score, binary", "BLSCSCORE_NSA_BIN0": "based on credit conditions score, binary", title="Naive PnLs for equity versus IRS baskets, based on survey scores, no bias", figsize=(16, 8), dix = dict_edbk sigx = dix["sigs"] naive_pnl = dix["pnls"] pnls = sorted([sig + type for sig in sigx for type in ["_PZN1", "_BIN1"]]) + ["Long only"] dict_labels={"BLSDSCORE_NSA_PZN1": "based on credit demand score, proportionate", "BLSCSCORE_NSA_PZN1": "based on credit conditions score, proportionate", "BLSDSCORE_NSA_BIN1": "based on credit demand score, binary", "BLSCSCORE_NSA_BIN1": "based on credit conditions score, binary", "Long only": "Always long equity versus fixed income" title="Naive PnLs for equity versus IRS baskets, based on survey scores, long equity bias", figsize=(16, 8), dix = dict_edbk sigx = dix["sigs"] naive_pnl = dix["pnls"] pnls = [sig + type for sig in sigx for type in ["_PZN0", "_PZN1", "_BIN0", "_BIN1"]] + [ "Long only" df_eval = naive_pnl.evaluate_pnls( │ │Return (pct ar)│St. Dev. (pct ar)│Sharpe Ratio│Sortino Ratio│Max 21-day draw│Max 6-month draw│Traded Months│ │ xcat │ │ │ │ │ │ │ │ │BLSCSCORE_NSA_BIN0 │2.568152 │10.0 │0.256815 │0.367062 │-13.520579 │-20.63655 │291 │ │BLSCSCORE_NSA_BIN1 │3.124153 │10.0 │0.312415 │0.469558 │-18.432993 │-28.134401 │291 │ │BLSCSCORE_NSA_PZN0 │3.262396 │10.0 │0.32624 │0.474418 │-17.65661 │-29.35528 │291 │ │BLSCSCORE_NSA_PZN1 │4.058176 │10.0 │0.405818 │0.568846 │-15.795265 │-21.980484 │291 │ │BLSDSCORE_NSA_BIN0 │4.640368 │10.0 │0.464037 │0.668771 │-13.524447 │-20.642454 │291 │ │BLSDSCORE_NSA_BIN1 │4.712011 │10.0 │0.471201 │0.714574 │-18.165055 │-27.725446 │291 │ │BLSDSCORE_NSA_PZN0 │4.373259 │10.0 │0.437326 │0.645153 │-16.454033 │-31.283752 │291 │ │BLSDSCORE_NSA_PZN1 │4.354862 │10.0 │0.435486 │0.61721 │-19.550447 │-25.152307 │291 │ │ Long only │1.35191 │10.0 │0.135191 │0.186025 │-19.959885 │-24.990098 │291 │
{"url":"https://macrosynergy.com/academy/notebooks/equity-versus-fixed-income-the-predictive-power-of-bank-surveys/","timestamp":"2024-11-09T16:08:19Z","content_type":"text/html","content_length":"341370","record_id":"<urn:uuid:e1424af3-ca11-43e3-839b-5f2b988e88f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00678.warc.gz"}
Let's Practice Some Logical Reasoning Questions This quiz contains questions related to logical reasoning and the quiz tests the reasoning ability of student. The quiz contains different types of questions like making the sequences, finding the missing words and so on which will help the students to increase their learning capacity.
{"url":"https://merithub.com/quiz/lets-practice-some-logical-reasoning-questions-c8sqvkpnuvtf434a60ng","timestamp":"2024-11-11T03:03:47Z","content_type":"text/html","content_length":"40468","record_id":"<urn:uuid:162a3ccd-bd87-4d5e-afae-9c2b5ec05371>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00233.warc.gz"}
random.polychor.pa: A Parallel Analysis with Polychoric Correlation Matrices The Function performs a parallel analysis using simulated polychoric correlation matrices. The nth-percentile of the eigenvalues distribution obtained from both the randomly generated and the real data polychoric correlation matrices is returned. A plot comparing the two types of eigenvalues (real and simulated) will help determine the number of real eigenvalues that outperform random data. The function is based on the idea that if real data are non-normal and the polychoric correlation matrix is needed to perform a Factor Analysis, then the Parallel Analysis method used to choose a non-random number of factors should also be based on randomly generated polychoric correlation matrices and not on Pearson correlation matrices. Random data sets are simulated assuming or a uniform or a multinomial distribution or via the bootstrap method of resampling (i.e., random permutations of cases). Also Multigroup Parallel analysis is made available for random (uniform and multinomial distribution and with or without difficulty factor) and bootstrap methods. An option to choose between default or full output is also available as well as a parameter to print Fit Statistics (Chi-squared, TLI, RMSEA, RMR and BIC) for the factor solutions indicated by the Parallel Analysis. Also weighted correlation matrices may be considered for PA. Version: 1.1.4-5 Depends: psych, nFactors, boot Imports: MASS, mvtnorm, sfsmisc Published: 2023-07-15 DOI: 10.32614/CRAN.package.random.polychor.pa Author: Fabio Presaghi [aut, cre], Marta Desimoni [ctb] Maintainer: Fabio Presaghi <fabio.presaghi at uniroma1.it> License: GPL-2 | GPL-3 [expanded from: GPL (≥ 2)] NeedsCompilation: no Citation: random.polychor.pa citation info CRAN checks: random.polychor.pa results Reference manual: random.polychor.pa.pdf Package source: random.polychor.pa_1.1.4-5.tar.gz Windows r-devel: random.polychor.pa_1.1.4-5.zip, r-release: random.polychor.pa_1.1.4-5.zip, r-oldrel: random.polychor.pa_1.1.4-5.zip macOS binaries: r-release (arm64): random.polychor.pa_1.1.4-5.tgz, r-oldrel (arm64): random.polychor.pa_1.1.4-5.tgz, r-release (x86_64): random.polychor.pa_1.1.4-5.tgz, r-oldrel (x86_64): Old sources: random.polychor.pa archive Please use the canonical form https://CRAN.R-project.org/package=random.polychor.pa to link to this page.
{"url":"https://cran.opencpu.org/web/packages/random.polychor.pa/index.html","timestamp":"2024-11-03T06:50:20Z","content_type":"text/html","content_length":"8705","record_id":"<urn:uuid:600b970e-d399-446c-91d6-aab714cad337>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00452.warc.gz"}
[tex4ht] usage with pdftex Martin Weis martin.weis.newsadress at gmx.de Mon Aug 2 18:37:30 CEST 2010 Am 31.07.2010 01:50, schrieb D. R. Evans: > Karl Berry said the following at 07/30/2010 04:44 PM : The only real problem with > that would be that at some point I'd be bound to make a mistake and forget > to synchronize them. You can use either package ifthen and define a bool to switch content on/off, or use package comment and define your own begin/end blocks. I usually do that for the images to not convert them by tex4ht, as this takes time and renders them to bitmaps, which I usually dont need... ifthen goes like this: % preamble \newboolean{withpdfspecials} % declaration \setboolean{withpdfspecials}{false} % set false % \setboolean{withpdfspecials}{true} % set true % in document { %then clause % The stuff you want to include % \pdf.... }{ %else clause % leave this empty/alternate content comment like this: % preamble % \excludecomment{todo} % in document TODO: change the following part the comment environment might not work within other environments (e.g. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 262 bytes Desc: OpenPGP digital signature URL: <http://tug.org/pipermail/tex4ht/attachments/20100802/d2aa5414/attachment.bin> More information about the tex4ht mailing list
{"url":"https://tug.org/pipermail/tex4ht/2010q3/000135.html","timestamp":"2024-11-11T20:05:52Z","content_type":"text/html","content_length":"4013","record_id":"<urn:uuid:d7921ce8-9f00-402a-bb3f-6aaf29bf46e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00021.warc.gz"}
How to handle SOS type 1 and SOS type 2 constraints? Some constraints of my model involve general constraints: A = min[B, x] & B = piecewise linear function of x So, Gurobi converts these constraints into SOS type 1 and type 2 constraints during the presolve stage, then moves to MIPNode stage. The problem is that Gurobi takes a long time for the MIPNode stage. (For the size of the instance, please refers to the statistics below) Optimize a model with 16904 rows, 25632 columns and 57162 nonzeros Model has 12996 general constraints Variable types: 25632 continuous, 0 integer (0 binary) Coefficient statistics: Matrix range [1e-03, 1e+04] Objective range [7e+01, 7e+04] Bounds range [1e+05, 7e+06] RHS range [6e+00, 7e+03] PWLCon x range [3e+00, 1e+06] PWLCon y range [4e+00, 4e+05] Presolve removed 25423 rows and 46696 columns (presolve time = 5s) ... Presolve added 13191 rows and 46212 columns Presolve time: 5.76s Presolved: 30095 rows, 71844 columns, 213165 nonzeros Presolved model has 9443 SOS constraint(s) Variable types: 71262 continuous, 582 integer (582 binary) Root relaxation presolve removed 3601 rows and 11915 columns Root relaxation presolved: 26494 rows, 59929 columns, 174117 nonzeros My question is as follows: 1) Any better formulation for my general constraints for A & B? (e.g., using Big-M) 2) Any SOS-dedicated cuts supported in Gurobi? I'm reading some papers for SOS cuts but I am wondering if such cuts already have been applied to Gurobi as these papers were published more than 10 years ago. Eunseok Kim • Hi, 1) Any better formulation for my general constraints for A & B? (e.g., using Big-M) Since it is generally more efficient to capture SOS constraints as linear constraints, the Gurobi presolve would automatically reformulate the SOS constraints into their binary forms using big-M values if a very large M is not required. The user has control over this behavior using the parameters PreSOS1BigM and PreSOS2BigM. 2) Any SOS-dedicated cuts supported in Gurobi? I'm reading some papers for SOS cuts but I am wondering if such cuts already have been applied to Gurobi as these papers were published more than 10 years ago. Each general constraint in Gurobi has an equivalent MIP formulation including auxiliary variables and linear and SOS constraints. Adding such constraints to a continuous model would transform it to a MIP where the SOS constraints are not included in the root LP relaxation and are addressed with branching rules. If an SOS constraint is not reformulated into a binary form during presolve, it is not included in the root LP relaxation and does not participate in cutting plane generation. Which paper are you referring to? Best regards, • Hi Maliheh, Thanks for your prompt response. According to the log, it seems like Gurobi converted general constraints into SOS constraints. 1) Does it mean that Gurobi converted the general constraints into binary form with big-M during presolve stage? or left it as SOS constraints? I'm confused about whether these general constraints were included in the root LP relaxation or not. 2) I have read the description of parameters PreSOS1BigM and PreSOS2BigM. It tells that large Big-M may cause numerical issues. But I'm unsure of the appropriate values for these constraints as I can't figure out the form of SOS constraints converted by Gurobi presolve. Could you provide further explanation on this topic? I guess that if I choose too small values, then the model will become infeasible. 3) List of papers I'm referring to: - M. Zhao and I. R. de Farias Jr. The Piecewise Linear Optimization Polytope: New Inequalities and Intersection with Semi-Continuous Constraints. Mathematical Programming, pages 1-39, 2012. - I. de Farias, E. L. Johnson, and G. L. Nemhauser. Facets of the Complementarity Knapsack Polytope. Mathematics of Operations Research, 27(1):210-226, 2002 Best regards, Eunseok Kim • Hi, 1) The log shows: Presolved model has 9443 SOS constraint(s) This implies that all SOS constraints are not reformulated into their binary forms. The 9443 SOS constraints will not be part of the LP root relaxation and will be addressed using branching 2) You need to do some experimentation to find out what the appropriate M value for your problem is. As the values of these parameters increase, the likelihood of reformulating all the SOS constraints into their binary forms increase. It is also important to make sure you define the tightest possible finite bounds for all the variables participating in the general constraints. In case you are using the Gurobi Python API, you can write the presolved model into a file using the method Model.presolve() to find out the type of the SOS constraints in your model. The simple general constraints are usually translated into a MIP using SOS1 constraints and the function constraints are translated into a MIP using SOS2 constraints. 3) Cutting planes are an integral part of the Gurobi optimizer and there are various globally valid cuts implemented in Gurobi. If you search for the word “Cuts” in the Gurobi Parameter Descriptions page, you will see the various cuts implemented in Gurobi. However, as mentioned, the SOS constraints that remain in the presolved model are addressed using branching rules and are not part of the LP relaxation and cutting plane generation. Best regards, Please sign in to leave a comment.
{"url":"https://support.gurobi.com/hc/en-us/community/posts/14112534776849-How-to-handle-SOS-type-1-and-SOS-type-2-constraints","timestamp":"2024-11-15T00:02:40Z","content_type":"text/html","content_length":"48203","record_id":"<urn:uuid:916481eb-fb08-4db3-8d05-1a138e58ce82>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00410.warc.gz"}
Write Pythonic Code Like a Seasoned Developer Write Pythonic Code Like a Seasoned Developer Transcripts Chapter: Classes and Objects Lecture: Encapsulation and data hiding 0:01 Encapsulation and data hiding is a key building block for object-oriented design. And Python, being a very object-oriented language, 0:07 or at least having great support for object-oriented programming, of course, has ways to do this. 0:13 But I would say it's less Pythonic to focus heavily on these ideas, 0:16 things like private variables, and whatnot with inside classes or protected variables that only derived classes can see, things like this. 0:25 But let's look at it anyway. So, over here we have a class called PetSnake, and you can give it a name and you can give it an age, 0:31 and you can supply this protected value but it builds it up as you create it and possibly these changes over time, right, 0:38 this is a great simplification of anything you might really use. And, like before, we have this string overwrite, 0:43 where we can print out some information about this. So here we can say "Here is my pet snake:", I want her called Slide, it's 6 years old, 0:50 and we can just print it out, that will call this method, we can also access its values directly here, so let's just run this. 0:57 Great, here is my pet snake, age 6, looks like it has an age and a name backwards but that's fine, and the protection level here, perfect. 1:04 There is nothing wrong with this class, it seems fine to me, but what if we wanted the age and the name to be read-only? 1:11 Once you have created a snake, you can't change its name, once you've created a snake, you can't change its age, 1:16 other than possibly having some way to like give it a birthday or something like that. 1:20 First of all, let me switch these, because this is kind of bugging me, this is backwards, 1:24 so in Python, there is a way to do this and let's work with this protected value, 1:29 let's suppose that we would like this to be accessible to derive classes but we want to indicate to the consumers of it, 1:35 "hey you probably shouldn't be messing with it", so let's just, before we change it, let's print it out over here, 1:40 so here we'll print it out and see everything is fine, if you look at the warnings form PyCharm, no warnings. 1:45 So the way that you indicate something should not be consumed outside of the class or more generally outside modules, 1:51 sort of externally is to say "_" as the prefix. So now if I say this, notice, this goes away, 1:57 obviously, because it doesn't exist, but we can put it back, that's fine, this goes away because it doesn't exist, we can put it back, 2:04 but now we have this warning and PyCharm is saying: "Access to a protected member such and such of class 2:10 you probably shouldn't be doing this unless you know what you are doing." However, this is just a warning, it still works. 2:17 Notice here we are reading the name and the age but we just as well could, and we are going to say "py.name = py.name.upper()", something like that, 2:27 so now we have Slide, so we are actually changing the type, OK and maybe I'll change this print statement order as well 2:33 here we go, SLIDE and SLIDE, capital. So what if we don't want this to be possible, we want read-only access to this and we'd have to provide a way 2:40 to get to it which, we'll get to later. So the way you do that in Python is you use double underscores, and of course down here, 2:48 those names changed, let me put this back for just a second, if we really want to make this change we can hit Ctrl+T 2:54 and do double underscore and change it everywhere Ctrl+T to save me some typing and be safer, of course. 3:01 You can see it changes everywhere but down here PyCharm is like "not so sure this is going to work well for you", 3:06 unresolved reference, well, maybe it's just hiding from us, maybe it's saying you know, you really shouldn't access this, 3:12 we are going to tell you that it's not there. If I say "py." and the only reason it thinks the name is there is because we are doing this line, 3:18 if I take this line away, there is no name, and it thought that line was creating the thing called __name which it would have, 3:24 if we set it, you can see those don't show up. OK, so now let's run it and see what happens. Boom, PetSnake has no thing called __name 3:34 and yet if I hide this, it does seem to have __name, so what is going on here? So you really can't access it by name here, 3:43 so let's look, so we'll look inside of type and say "what methods and fields does it have?" with this thing called dir, so I can "dir(py)" 3:51 and ask: "What basically features do you have?" It'll show us all the various things here, 3:57 so if we come over and we look for, here is our protected value, let's go and add just one normal value, 4:04 so I'll just say "self.normal = True", so here at the end is our normal value, 4:09 here is our protected one so we can't get to it, we are just told "you probably shouldn't". 4:13 So here we are saying "self.__age", "self.__name" and that seems to work, 4:18 but it's actually got this rewritten name, where it's rewritten based on the type. 4:24 So technically, you could come over here and copy this out and access this, and it certainly wouldn't look like what's written up here 4:31 and it would tell you "you know, you probably should stay away from that." This is how you do private fields within classes in Python, 4:37 here is how you do protected ones. And of course, doing neither of those, makes it just a normal type. OK, so here is our PetSnake in a graphic. 4:46 We saw if we wanted the age and the name to be completely private, we use double underscores, 4:52 if we want to have a protected variable that we want to strongly encourage people 4:55 to stay away from, we use single underscores and you saw that we get warnings on the linters and things like that, 5:01 so if we go and write some code that tries to access this type, here we can see we are creating a PetSnake called a py 5:07 we'll get to this property thing in a moment, if we want to say "py._protected", we can, but that does give us a warning, if we try to say "py.__age", 5:19 we saw that it crashes and basically that name doesn't exist, it's technically rewritten to be kind of hidden but normal access doesn't work for it.
{"url":"https://training.talkpython.fm/courses/transcript/write-pythonic-code-like-a-seasoned-developer/lecture/60802","timestamp":"2024-11-07T16:42:28Z","content_type":"text/html","content_length":"39701","record_id":"<urn:uuid:5cae747f-c18d-4d1b-ae8c-7eb63e6f5218>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00546.warc.gz"}
Home | David Nicholas Every now and again I find myself getting engrossed with certain ideas or tid bits of information. When that happens, I like to build something that allows me to explore the concept further. My current obsession is Jon Conway's Game of Life. At some point, I came across a Veritasim video that explores Math's incompleteness and undecidibility. Although not the main focus of the video, John Conway's Game of Life and its Turing completeness were the highlights. They pulled off a classic "Turing complete move" — demonstrating Turing completeness using the mechanism itself. In other words, they showcased the Game of Life being powered by the Game of Life. I had always found the Game of Life intriguing, but after seeing that, I knew I had to build it. Nothing as complex as the Game of Life powering itself, just a single random game along with some of the well-known configurations and structures. The Game of Life The Game of Life, developed by John Conway in the 1970s, is a cellular automaton — a mathematical model that simulates complex processes using a grid of cells governed by simple rules. Conway's Game of Life, as the name suggests, mimics life, where cells on the grid either live or die based on four key rules: 1. Any live cell with fewer than two live neighbors dies. 2. Any live cell with two or three live neighbors lives on to the next generation. 3. Any live cell with more than three live neighbors dies. 4. Any dead cell with exactly three live neighbors becomes a live cell. These foundational rules give rise to a surprisingly complex universe within Conway's Game of Life. Amidst this complexity, researchers have identified and categorized several popular structures, each exhibiting well-defined behaviors. Some interesting ones are: • Still Lifes - Stable structures that don't change • Oscillators - Structures that return to their starting point after some period • Spaceships - Structures that travel across the grid These structures highlight the diversity of Conway's Game of Life and set the stage for building our own simulator to experiment with these behaviors firsthand. Let's get building. The Build Building the Game of Life is a fairly straightforward endeavor. There are four key components we need to address: 1. The Grid / Game State 2. The Game Logic 3. Rendering the Game The first two can (and should) be done without relying on a framework or library like Angular or React. The fourth is a matter of personal preference — in this post, we'll use React. The Grid Conway's Game of Life is traditionally played on an infinite grid, which would be interesting to implement. However, to keep things simple, we will use a finite grid and apply wrapping. There are several ways to wrap a grid, including: • Toroidal Grids - A toroidal grid stitches together the left and right edges as well as the top and bottom edges. • Cylindrical Grids - A cylindrical grid wraps only one pair of opposite edges, typically the vertical edges, forming a cylinder. • Möbius Strip Grids - A Möbius strip grid has one pair of opposite edges that wrap around with a twist, creating a non-orientable surface. There are a couple of other grid types as well, and while it would be fun to get into the specifics of each, for the sake of this post, we'll be implementing a toroidal grid. In my opinion, this is the one that makes the most sense, making our implementation more straightforward. Just like the several ways to implement a grid wrap, there are various ways to actually implement our grid. Our cells need to store whether or not they are alive. With this in mind, there are a few popular approaches we can use, including: 1. A multi-dimensional array 2. A one-dimensional array 3. A binary Number Each of these has its own pros and cons. The multi-dimensional array closely matches our mental grid model, but creating and maintaining a multi-dimensional array can be complex. A one-dimensional array is easier to create and maintain but requires more effort to figure out neighboring cells, increasing the complexity of our game logic. A binary number is arguably easier to create and maintain, but it also has pronounced issues similar to those of a one-dimensional array regarding calculating neighbors. Something else that might influence your choice of grid implementation is what information you want to store about each cell. Currently, we're just focusing on whether the cell is alive or dead. However, if you want to store additional information, such as how many generations a cell has been alive, some of these options become less viable. It's been a while since I've worked with binary, so for this post, I'm going with the binary number implementation. As mentioned above, this means our game logic will be more complex since most of it will involve bit twiddling. Using a binary number also requires access to very large numbers. In JavaScript/TypeScript, the primitive number type only goes up to 2^31 - 1, which means we will be using a more recently added primitive: bigint. The main benefit of using a binary number for the grid is the implmentation is a bigint which means there actually isn't any implmentation to discuss here beyond declaring a variable, however, since we are going to be moving between one and two dimensions and using a torodial grid, we will need a couple helper functions which will help out our game logic implmentation. Converting Bit Position to Grid Coordinates We need to know the size of the grid we're creating to convert a binary number to a grid. A board of size N x M requires a binary number that is N x M bits long. To simplify, I'll make my grid square, so M and N will be the same. To avoid confusion, let's call it gridLength instead of N The basic idea for converting anything one dimensional to two dimension is to break it up into slices of size gridLength and stack them on top of each other. To calculate "x" coordinate of our bit we can modulo the position of the bit with the gridLength. This will tell us how far over the given cell is. This works because the modulo operation effectively resets the count after reaching the end of each row, aligning the bit positions with their respective "x" coordinates in the grid. export const getX = (position: bigint, gridLength: bigint): bigint => { return position % gridLength; To figure out our "y" coordinate, rather than using modulo, we can use integer division to count how many "stacks" of the slices we are in. This works because integer division effectively counts the number of complete rows above the current cell, aligning the bit positions with their respective "y" coordinates in the grid. export const getY = (position: bigint, gridLength: bigint): bigint => { return position / gridLength; Last thing we'll need is a way to get back. We can recalculate the position using some math. export const toPosition = ( x: bitint, y: bitint, sideLength: bitint ): bigint => { return y * sideLength + x; Getting Neighbor Bit Positions The Game of Life's rules require us to know how many of any given cell's neighbors are alive. Since we want the grid to be toroidal, there will be some extra complexity. If we were to ignore the edges of the grid, a cell's neighbors can be calculated by adding and subtracting one from the cell's x and y coordinates. If you want to be "math-y" we can describe the neighbors of a cell as the product of two sets dx and dy where dx and dy aren't both 0. dx = {-1, 0, 1} dy = {-1, 0, 1} neighbors = {(dx x dy) | dxi ∈ dx, dyj ∈ dy ,dxi,dyj != 0} If you don't want to be "math-y", you can be "program-y" and calculate the neighbors given a position with the following function. export const getNeighborPositionNoWrap = ( x: bigint, y: bigint, dx: bigint, dy: bigint, gridLength: bigint ) => { const nx = x + dx; const ny = y + dy; return toPosition(nx, ny, gridLength); Now we can address the elephant in the room — handling the wrap. Luckily, this is more straightforward than it initially sounds. We'll take our current nx/ny, add the gridLength, and modulo by the gridLength. This will wrap our edges around. The wrap function works the same for both nx and ny because the grid is square. If the grid were not square, the wrap function for ny would need to use the other side's length. export const wrap = (value: bigint, gridLength: bigint): bigint => { return (value + gridLength) % gridLength; export const getNeighborPosition = ( x: bitint, y: bitint, dx: bitint, dy: bitint, gridLength: bitint ) => { const nx = wrap(x + dx, gridLength); const ny = wrap(y + dy, gridLength); return toPosition(nx, ny, boardSideLength); Putting this all together, we've created our grid! you can see how the toroidal grid wrapping works by moving your mouse around over the grid, or by tapping on a cell if you're on your phone. Everything we've discussed in this section isn't unique to the "binary implmentation" of a grid. It applies for any array implementation as well! The Game Logic The game logic itself will be a function that accepts the current game state and returns the next. To do this, we'll need to walk through each cell and... 1. Determine whether or not the cell is alive 2. Count the number of live neighbors 3. Apply the rules of the game which will potentially change the cell's state Starting with the easy bit, to walk through the cell state, we'll need to create a loop that goes from 0 to gridLength * gridLength. const playGame = (cells: bigint, gridLength: bigint): bigint => { let newCells = cells; for ( let bitPosition = 0n; bitPosition < gridLength * gridLength; ) { * Determine whether or not the cell is alive * Count the number of live neighbors * Apply the rules of the game which will potentially * change the cell's state A Crash Course in Bit Twiddling Getting the cell state introduces the first bit of bit twiddling we'll use. If you are unfamilar with bit twiddling, heres a crash course. Bit twiddling is the intentional manipulation of individual bits (1s and 0s of a binary number). There are six fundamental operators most languages provide for developers to manipulate bits. 1. Bitwise AND (&) - Gives you the "and" of the bits provided. You can think of this as the logical &&, it works the same way. Notably, the identity of & is 1 meaning x & 1 is x (in math terms the identity of & is 1). 2. Bitwise OR (|) - Gives you the "or" of the bits provided. Like & you can think of this as the logical || and notably x | 1 is always 1 (or 1 is |'s annihilator). 3. Bitwise Negation (~) - Gives you the negation of whatever succeeds it - think of ! with logical operators. We wont be using this in this project due to two's complement reasons I don't want to get into. 4. Bitwise XOR (^) - Gives you a 1 if bits are different and 0 otherwise. 5. Left Shift (<<) - Shifts the lefthand side value to the left by the right hand side amount. 6. Right Shift (>>) - Shifts the lefthand side value to the right by the right hand side amount. A friend of mine once said, a good rule of thumb when working with bits is to "read" with &, "write" with |, and toggle with ^, which spoilers, is exactly how we will be using them. Back to Game State Ok so where were we? Right - game state. Let's start with determining if a cell is alive. A cell is alive if the bit at the cell's position is a 1. It's dead if its a 0. We can determine the state of the cell by shifting the grid over by the bit position (grid >> bitPosition) and &-ing (and-ing) it with 1. The shifting will give us either a 0 or a 1 , and the & tells us if its a 1 or not. Again, this works because 1 is the identity of of & meaning that x & 1 is x. If you don't like thinking about bits/ones and zeros the same applies for booleans and && where x && true is x. const getCellState = (grid: bigint, bitPosition: bigint): bigint => { return (grid >> bitPosition) & 1n; // ... const isAlive = getCellState(grid, bitPosition) === 1n; Next, we need to count the number of live neighbors. We already have most of the tools needed to accomplish this task. What we now need to do is loop over all combinations of dx and dy and apply our getNeighborPosition function to get the position of each of the neighbors. export const getNeighborPositions = ( bitPosition: bigint, gridLength: bigint ): bigint[] => { const neighborPositions = []; const dx = [-1n, 0n, 1n]; const dy = [-1n, 0n, 1n]; const x = getX(bitPosition, gridLength); const y = getY(bitPosition, gridLength); for (let i = 0; i < dx.length; i++) { for (let j = 0; j < dy.length; j++) { if (dx[i] === 0n && dy[j] === 0n) { // skip the current cell const neighborBitPosition = getNeighborPosition( return neighborPositions; From there, we just need to count the number of live neighbors using the getCellState function. export const countLiveNeighbors = ( cells: bigint, bitPosition: bigint, gridLength: bigint ): number => { return getNeighborPositions(bitPosition, gridLength).reduce( (liveNeighbors, position) => { const toAdd = getCellState(cells, position) === 1n ? 1 : 0; return liveNeighbors + toAdd; Now all we need to do is apply the rules of the game to our binary number. Since it's been a while, let's refresh ourselves on the rules: 1. Any live cell with fewer than two live neighbors dies. 2. Any live cell with two or three live neighbors lives on to the next generation. 3. Any live cell with more than three live neighbors dies. 4. Any dead cell with exactly three live neighbors becomes a live cell. To make our bit logic simpler, we can also think about this as: • if the cell is alive and has less than 2 or more than 3 alive neighbors, kill it. • if the cell is dead and has exactly 3 neighbors, bring it to life. Both of these could be written with ^ (xor) to toggle the current position, but for the sake of doing more twiddling techniques, we'll use | (or) to bring the cell back to life. The general approach is the same for both: shift and apply the operator. All together our playGame logic looks like this. export const playGame = (cells: bigint, gridLength: bigint): bigint => { let newCells = cells; for ( let bitPosition = 0n; bitPosition < gridLength * gridLength; ) { const neighbors = countLiveNeighbors(cells, bitPosition, gridLength); const isAlive = getCellState(cells, bitPosition) === 1n; if (isAlive) { if (neighbors < 2 || neighbors > 3) { newCells ^= 1n << bitPosition; } else { if (neighbors === 3) { newCells |= 1n << bitPosition; return newCells; With our game logic in place and the rules of Conway's Game of Life applied, it's time to bring our grid to life visually. Let's move on to rendering. This is both the most important and yet most boring section of this build. Without rendering, the game can't be seen - however to actually render it is a fairly trival matter. As previously mentioned, we'll be using react to render this out, so we will need to put our grid into react state; and to lay the game run the playGame function on our cells updating state afterwards. const GameOfLife = () => { const [grid, setGrid] = useState(initialGrid); const step = () => { setGrid((previousGrid) => playGame(previousGrid)); // ... rest of component Now that our game logic is complete and the rules can be applied, we just need to render the grid visually. For each cell, we'll check if it's alive, and if so, we'll change its background color. The same looping approach from the game logic will work here, iterating over gridLength * gridLength. To render it as a grid instead of a list, we'll leverage CSS by setting display: grid; and specifying the number of columns using the gridLength. interface GameOfLifeProps { gridLength: bigint; initialGrid: bigint; const GameOfLife = ({ gridLength, initialGrid }: GameOfLifeProps) => { const [grid, setGrid] = useState(initialGrid); const renderable = useMemo( () => [...Array.from({ length: gridLength * gridLength }).keys()].map(BigInt), const step = () => { setGrid((previousGrid) => playGame(previousGrid, gridLength)); return ( display: "grid", gridTemplateColumns: `repeat(${gridLength}, 1fr)`, width: "500px", height: "500px", {renderable.map((position, index) => ( getCellState(position, gridLength) === 1n ? "green" : "transparent", That's really all there is to it; by applying everything we've discussed, and we get a working game. There are a bunch of different ways you can create a grid to render from creating a random binary number to loading pre-created binary. The original reason I wanted to build this out was to explore some of the well known structures that a game can create -- so I'm going to use the latter. Looking at Presets There are three main structures that appear commonly in the game: • Still Lifes - Stable structures that don't change • Oscillators - Structures that return to their starting point after some period • Spaceships - Structures that travel across the grid Still Lifes are pretty mundane looking, and although its interesting that there are stable patterns that arise within the chaos of a game like the game of life, I didn't build a simulator to see things not moveBy, so we'll omit this one. Oscillators are fascinating structures in the Game of Life. After several generations, they periodically return to their original state, creating a rhythmic pattern. Here are a few interesting A smaller oscillator with a period of 2. It consists of two blocks that "blink" in and out of existence, mimicking a beacon's signal. Another simple oscillator with a period of 2. It features a set of six cells that alternate between two distinct shapes, resembling a toad hopping back and forth (Full disclaimer, I'm not sure that's what inspired the name, but it makes sense in my head and I like it). One of the most well-known oscillators, the pulsar has a period of 3. It consists of a central block surrounded by three arms on each side that oscillate in sync. This one is my favorite, primarily because it looks like something out of space invaders. This oscillator has a period of 15 and resembles a repeating pattern of vertical lines. These oscillators showcase the intriguing periodic behaviors that can emerge from Conway's simple rules as well as demonstrate the predictable periodic behavior that can emerge from simple rules. Oscillators also provide valuable insights into the stability and repetition within chaotic systems, making them key elements in studying the complexity of cellular automata. Spaceships are another captivating category of structures in the Game of Life. Unlike oscillators, spaceships travel across the grid while maintaining shape, creating an illusion of movement. They highlight the potential for mobility and interaction in the Game of Life. They are particularly interesting for their ability to move through the grid, interact with other patterns, and demonstrate how information can propagate in cellular automata. Here are a couple of notable examples Lightweight Space Ship (LWSS) The LWSS is a small and fast-moving spaceship with a period of 4. It's known for its compact size and efficiency, making it a popular example in many simulations (like this one haha). The glider is one of the most famous spaceships in the Game of Life. It has a period of 4 and moves diagonally across the grid. The glider has become an iconic symbol of the Game of Life. These spaceships illustrate the fascinating possibilities of movement and interaction within the cellular automaton. Wrapping Things Up Conway's Game of Life is more than just a simple cellular automaton; it's a window into the rich and complex world of emergent behavior from simple rules. Through this journey, we've explored the fundamental concepts of the Game of Life, from setting up the grid and understanding bitwise operations to implementing game logic and rendering the results. We've also delved into the intriguing patterns that arise, such as oscillators and spaceships, each offering unique insights into the stability, repetition, and mobility within this system. Building a simulator for the Game of Life enhances our understanding of these mathematical phenomena. It provides a hands-on way to witness the beauty and complexity that can emerge from simplicity. I encourage you to experiment with different configurations and see what new patterns you can discover. The world of cellular automata is vast and full of surprises, and Conway's Game of Life is just the beginning. Happy simulating!
{"url":"https://davidnicholas.dev/blog/conways-game-of-life","timestamp":"2024-11-10T12:31:37Z","content_type":"text/html","content_length":"182503","record_id":"<urn:uuid:c43a550b-e95f-44b3-b99e-32389cd30129>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00118.warc.gz"}
Equivalence principle In the theory of general relativity, the equivalence principle is the equivalence of gravitational and inertial mass, and Albert Einstein's observation that the gravitational "force" as experienced locally while standing on a massive body (such as the Earth) is the same as the pseudo-force experienced by an observer in a non-inertial (accelerated) frame of reference. Einstein's statement of the equality of inertial and gravitational mass A little reflection will show that the law of the equality of the inertial and gravitational mass is equivalent to the assertion that the acceleration imparted to a body by a gravitational field is independent of the nature of the body. For Newton's equation of motion in a gravitational field, written out in full, it is: (Inertial mass) \( \cdot \) (Acceleration) = (Intensity of the gravitational field) \( \cdot \) (Gravitational mass). It is only when there is numerical equality between the inertial and gravitational mass that the acceleration is independent of the nature of the body.[1][2] Development of gravitational theory Main article: History of gravitational theory File:Apollo 15 feather and hammer drop.ogvPlay media During the Apollo 15 mission in 1971, astronaut David Scott showed that Galileo was right: acceleration is the same for all bodies subject to gravity on the Moon, even for a hammer and a feather. Something like the equivalence principle emerged in the early 17th century, when Galileo expressed experimentally that the acceleration of a test mass due to gravitation is independent of the amount of mass being accelerated. Johannes Kepler, using Galileo's discoveries, showed knowledge of the equivalence principle by accurately describing what would occur if the moon were stopped in its orbit and dropped towards Earth. This can be deduced without knowing if or in what manner gravity decreases with distance, but requires assuming the equivalency between gravity and inertia. If two stones were placed in any part of the world near each other, and beyond the sphere of influence of a third cognate body, these stones, like two magnetic needles, would come together in the intermediate point, each approaching the other by a space proportional to the comparative mass of the other. If the moon and earth were not retained in their orbits by their animal force or some other equivalent, the earth would mount to the moon by a fifty-fourth part of their distance, and the moon fall towards the earth through the other fifty-three parts, and they would there meet, assuming, however, that the substance of both is of the same density. —Johannes Kepler, "Astronomia Nova", 1609[3] The 1/54 ratio is Kepler's estimate of the Moon–Earth mass ratio, based on their diameters. The accuracy of his statement can be deduced by using Newton's inertia law F=ma and Galileo's gravitational observation that distance D \( {\displaystyle D=(1/2)at^{2}} \). Setting these accelerations equal for a mass is the equivalence principle. Noting the time to collision for each mass is the same gives Kepler's statement that Dmoon/DEarth=MEarth/Mmoon, without knowing the time to collision or how or if the acceleration force from gravity is a function of distance. Newton's gravitational theory simplified and formalized Galileo's and Kepler's ideas by recognizing Kepler's "animal force or some other equivalent" beyond gravity and inertia were not needed, deducing from Kepler's planetary laws how gravity reduces with distance. The equivalence principle was properly introduced by Albert Einstein in 1907, when he observed that the acceleration of bodies towards the center of the Earth at a rate of 1g (g = 9.81 m/s2 being a standard reference of gravitational acceleration at the Earth's surface) is equivalent to the acceleration of an inertially moving body that would be observed on a rocket in free space being accelerated at a rate of 1g. Einstein stated it thus: we ... assume the complete physical equivalence of a gravitational field and a corresponding acceleration of the reference system. —Einstein, 1907 That is, being on the surface of the Earth is equivalent to being inside a spaceship (far from any sources of gravity) that is being accelerated by its engines. The direction or vector of acceleration equivalence on the surface of the earth is "up" or directly opposite the center of the planet while the vector of acceleration in a spaceship is directly opposite from the mass ejected by its thrusters. From this principle, Einstein deduced that free-fall is inertial motion. Objects in free-fall do not experience being accelerated downward (e.g. toward the earth or other massive body) but rather weightlessness and no acceleration. In an inertial frame of reference bodies (and photons, or light) obey Newton's first law, moving at constant velocity in straight lines. Analogously, in a curved spacetime the world line of an inertial particle or pulse of light is as straight as possible (in space and time).[4] Such a world line is called a geodesic and from the point of view of the inertial frame is a straight line. This is why an accelerometer in free-fall doesn't register any acceleration; there isn't any between the internal test mass and the accelerometer's body. As an example: an inertial body moving along a geodesic through space can be trapped into an orbit around a large gravitational mass without ever experiencing acceleration. This is possible because spacetime is radically curved in close vicinity to a large gravitational mass. In such a situation the geodesic lines bend inward around the center of the mass and a free-floating (weightless) inertial body will simply follow those curved geodesics into an elliptical orbit. An accelerometer on-board would never record any acceleration. By contrast, in Newtonian mechanics, gravity is assumed to be a force. This force draws objects having mass towards the center of any massive body. At the Earth's surface, the force of gravity is counteracted by the mechanical (physical) resistance of the Earth's surface. So in Newtonian physics, a person at rest on the surface of a (non-rotating) massive object is in an inertial frame of reference. These considerations suggest the following corollary to the equivalence principle, which Einstein formulated precisely in 1911: Whenever an observer detects the local presence of a force that acts on all objects in direct proportion to the inertial mass of each object, that observer is in an accelerated frame of reference. Einstein also referred to two reference frames, K and K'. K is a uniform gravitational field, whereas K' has no gravitational field but is uniformly accelerated such that objects in the two frames experience identical forces: We arrive at a very satisfactory interpretation of this law of experience, if we assume that the systems K and K' are physically exactly equivalent, that is, if we assume that we may just as well regard the system K as being in a space free from gravitational fields, if we then regard K as uniformly accelerated. This assumption of exact physical equivalence makes it impossible for us to speak of the absolute acceleration of the system of reference, just as the usual theory of relativity forbids us to talk of the absolute velocity of a system; and it makes the equal falling of all bodies in a gravitational field seem a matter of course. —Einstein, 1911 This observation was the start of a process that culminated in general relativity. Einstein suggested that it should be elevated to the status of a general principle, which he called the "principle of equivalence" when constructing his theory of relativity: As long as we restrict ourselves to purely mechanical processes in the realm where Newton's mechanics holds sway, we are certain of the equivalence of the systems K and K'. But this view of ours will not have any deeper significance unless the systems K and K' are equivalent with respect to all physical processes, that is, unless the laws of nature with respect to K are in entire agreement with those with respect to K'. By assuming this to be so, we arrive at a principle which, if it is really true, has great heuristic importance. For by theoretical consideration of processes which take place relatively to a system of reference with uniform acceleration, we obtain information as to the career of processes in a homogeneous gravitational field. —Einstein, 1911 Einstein combined (postulated) the equivalence principle with special relativity to predict that clocks run at different rates in a gravitational potential, and light rays bend in a gravitational field, even before he developed the concept of curved spacetime. So the original equivalence principle, as described by Einstein, concluded that free-fall and inertial motion were physically equivalent. This form of the equivalence principle can be stated as follows. An observer in a windowless room cannot distinguish between being on the surface of the Earth, and being in a spaceship in deep space accelerating at 1g. This is not strictly true, because massive bodies give rise to tidal effects (caused by variations in the strength and direction of the gravitational field) which are absent from an accelerating spaceship in deep space. The room, therefore, should be small enough that tidal effects can be neglected. Although the equivalence principle guided the development of general relativity, it is not a founding principle of relativity but rather a simple consequence of the geometrical nature of the theory. In general relativity, objects in free-fall follow geodesics of spacetime, and what we perceive as the force of gravity is instead a result of our being unable to follow those geodesics of spacetime, because the mechanical resistance of Earth's matter or surface prevents us from doing so. Since Einstein developed general relativity, there was a need to develop a framework to test the theory against other possible theories of gravity compatible with special relativity. This was developed by Robert Dicke as part of his program to test general relativity. Two new principles were suggested, the so-called Einstein equivalence principle and the strong equivalence principle, each of which assumes the weak equivalence principle as a starting point. They only differ in whether or not they apply to gravitational experiments. Another clarification needed is that the equivalence principle assumes a constant acceleration of 1g without considering the mechanics of generating 1g. If we do consider the mechanics of it, then we must assume the aforementioned windowless room has a fixed mass. Accelerating it at 1g means there is a constant force being applied, which = m*g where m is the mass of the windowless room along with its contents (including the observer). Now, if the observer jumps inside the room, an object lying freely on the floor will decrease in weight momentarily because the acceleration is going to decrease momentarily due to the observer pushing back against the floor in order to jump. The object will then gain weight while the observer is in the air and the resulting decreased mass of the windowless room allows greater acceleration; it will lose weight again when the observer lands and pushes once more against the floor; and it will finally return to its initial weight afterwards. To make all these effects equal those we would measure on a planet producing 1g, the windowless room must be assumed to have the same mass as that planet. Additionally, the windowless room must not cause its own gravity, otherwise the scenario changes even further. These are technicalities, clearly, but practical ones if we wish the experiment to demonstrate more or less precisely the equivalence of 1g gravity and 1g acceleration. Modern usage Three forms of the equivalence principle are in current use: weak (Galilean), Einsteinian, and strong. The weak equivalence principle The weak equivalence principle, also known as the universality of free fall or the Galilean equivalence principle can be stated in many ways. The strong EP, a generalization of the weak EP, includes astronomic bodies with gravitational self-binding energy[5] (e.g., 1.74 solar-mass pulsar PSR J1903+0327, 15.3% of whose separated mass is absent as gravitational binding energy[6][failed verification]). Instead, the weak EP assumes falling bodies are self-bound by non-gravitational forces only (e.g. a stone). Either way: The trajectory of a point mass in a gravitational field depends only on its initial position and velocity, and is independent of its composition and structure. All test particles at the alike spacetime point, in a given gravitational field, will undergo the same acceleration, independent of their properties, including their rest mass.[7] All local centers of mass free-fall (in vacuum) along identical (parallel-displaced, same speed) minimum action trajectories independent of all observable properties. The vacuum world-line of a body immersed in a gravitational field is independent of all observable properties. The local effects of motion in a curved spacetime (gravitation) are indistinguishable from those of an accelerated observer in flat spacetime, without exception. Mass (measured with a balance) and weight (measured with a scale) are locally in identical ratio for all bodies (the opening page to Newton's Philosophiæ Naturalis Principia Mathematica, 1687). Locality eliminates measurable tidal forces originating from a radial divergent gravitational field (e.g., the Earth) upon finite sized physical bodies. The "falling" equivalence principle embraces Galileo's, Newton's, and Einstein's conceptualization. The equivalence principle does not deny the existence of measurable effects caused by a rotating gravitating mass (frame dragging), or bear on the measurements of light deflection and gravitational time delay made by non-local observers. Active, passive, and inertial masses By definition of active and passive gravitational mass, the force on \( M_{1} \) due to the gravitational field of \( M_{0} \) is: \( F_1 = \frac{M_0^\mathrm{act} M_1^\mathrm{pass}}{r^2} \) Likewise the force on a second object of arbitrary mass2 due to the gravitational field of mass0 is: \( F_2 = \frac{M_0^\mathrm{act} M_2^\mathrm{pass}}{r^2} \) By definition of inertial mass: \( F = m^\mathrm{inert} a If \( m_{1} \) and \( m_{2} \) are the same distance r from \( m_{0} \) then, by the weak equivalence principle, they fall at the same rate (i.e. their accelerations are the same) \( a_1 = \frac{F_1}{m_1^\mathrm{inert}} = a_2 = \frac{F_2}{m_2^\mathrm{inert}} \) \( \frac{M_0^\mathrm{act} M_1^\mathrm{pass}}{r^2 m_1^\mathrm{inert}} = \frac{M_0^\mathrm{act} M_2^\mathrm{pass}}{r^2 m_2^\mathrm{inert}} \) \( \frac{M_1^\mathrm{pass}}{m_1^\mathrm{inert}} = \frac{M_2^\mathrm{pass}}{m_2^\mathrm{inert}} \) In other words, passive gravitational mass must be proportional to inertial mass for all objects. Furthermore, by Newton's third law of motion: \( F_1 = \frac{M_0^\mathrm{act} M_1^\mathrm{pass}}{r^2} \) must be equal and opposite to \( F_0 = \frac{M_1^\mathrm{act} M_0^\mathrm{pass}}{r^2} \) It follows that: \( \frac{M_0^\mathrm{act}}{M_0^\mathrm{pass}} = \frac{M_1^\mathrm{act}}{M_1^\mathrm{pass}} \) In other words, passive gravitational mass must be proportional to active gravitational mass for all objects. The dimensionless Eötvös-parameter η ( A , B ) {\displaystyle \eta (A,B)} \eta(A,B) is the difference of the ratios of gravitational and inertial masses divided by their average for the two sets of test masses "A" and "B." \( \eta(A,B)=2\frac{ \left(\frac{m_g}{m_i}\right)_A-\left(\frac{m_g}{m_i}\right)_B }{\left(\frac{m_g}{m_i}\right)_A+\left(\frac{m_g}{m_i}\right)_B} \) Tests of the weak equivalence principle Tests of the weak equivalence principle are those that verify the equivalence of gravitational mass and inertial mass. An obvious test is dropping different objects, ideally in a vacuum environment, e.g., inside the Fallturm Bremen drop tower. Researcher Year Method Result John Philoponus 6th Said that by observation, two balls of very different weights will fall at nearly the same speed no detectable difference Simon Stevin^[8] ~1586 Dropped lead balls of different masses off the Delft churchtower no detectable difference Galileo Galilei ~1610 Rolling balls of varying weight down inclined planes to slow the speed so that it was measurable no detectable difference Isaac Newton ~1680 Measure the period of pendulums of different mass but identical length difference is less than 1 part in 10^3 Friedrich 1832 Measure the period of pendulums of different mass but identical length no measurable difference Wilhelm Bessel Loránd Eötvös 1908 Measure the torsion on a wire, suspending a balance beam, between two nearly identical masses under the difference is 10±2 part in 10^9 (H[2]O/Cu)^[9] acceleration of gravity and the rotation of the Earth Roll, Krotkov 1964 Torsion balance experiment, dropping aluminum and gold test masses \( |\eta(\mathrm{Al},\mathrm{Au})|=(1.3\pm1.0)\times10^{-11} \)[10] and Dicke David Scott 1971 Dropped a falcon feather and a hammer at the same time on the Moon no detectable difference (not a rigorous experiment, but very dramatic being the first lunar one^[11]) Braginsky and 1971 Torsion balance, aluminum and platinum test masses, measuring acceleration towards the Sun difference is less than 1 part in 10^12 Eöt-Wash group 1987– Torsion balance, measuring acceleration of different masses towards the Earth, Sun and galactic center, \( \eta(\text{Earth},\text{Be-Ti})=(0.3 \pm 1.8)\times 10^{-13}\) [ using several different kinds of masses 12] Year Investigator Sensitivity Method 500? Philoponus^[14] "small" Drop tower 1585 Stevin^[15] 5×10^−2 Drop tower 1590? Galileo^[16] 2×10^−2 Pendulum, drop tower 1686 Newton^[17] 10^−3 Pendulum 1832 Bessel^[18] 2×10^−5 Pendulum 1908 (1922) Eötvös^[19] 2×10^−9 Torsion balance 1910 Southerns^[20] 5×10^−6 Pendulum 1918 Zeeman^[21] 3×10^−8 Torsion balance 1923 Potter^[22] 3×10^−6 Pendulum 1935 Renner^[23] 2×10^−9 Torsion balance 1964 Dicke, Roll, Krotkov^[10] 3x10^−11 Torsion balance 1972 Braginsky, Panov^[24] 10^−12 Torsion balance 1976 Shapiro, et al.^[25] 10^−12 Lunar laser ranging 1981 Keiser, Faller^[26] 4×10^−11 Fluid support 1987 Niebauer, et al.^[27] 10^−10 Drop tower 1989 Stubbs, et al.^[28] 10^−11 Torsion balance 1990 Adelberger, Eric G.; et al.^[29] 10^−12 Torsion balance 1999 Baessler, et al.^[30]^[31] 5x10^−14 Torsion balance 2017 MICROSCOPE^[32] 10^−15 Earth orbit Experiments are still being performed at the University of Washington which have placed limits on the differential acceleration of objects towards the Earth, the Sun and towards dark matter in the galactic center. Future satellite experiments[33] – STEP (Satellite Test of the Equivalence Principle), and Galileo Galilei – will test the weak equivalence principle in space, to much higher With the first successful production of antimatter, in particular anti-hydrogen, a new approach to test the weak equivalence principle has been proposed. Experiments to compare the gravitational behavior of matter and antimatter are currently being developed.[34] Proposals that may lead to a quantum theory of gravity such as string theory and loop quantum gravity predict violations of the weak equivalence principle because they contain many light scalar fields with long Compton wavelengths, which should generate fifth forces and variation of the fundamental constants. Heuristic arguments suggest that the magnitude of these equivalence principle violations could be in the 10^−13 to 10^−18 range.^[35] Currently envisioned tests of the weak equivalence principle are approaching a degree of sensitivity such that non-discovery of a violation would be just as profound a result as discovery of a violation. Non-discovery of equivalence principle violation in this range would suggest that gravity is so fundamentally different from other forces as to require a major reevaluation of current attempts to unify gravity with the other forces of nature. A positive detection, on the other hand, would provide a major guidepost towards unification.^[35]The Einstein equivalence principle What is now called the "Einstein equivalence principle" states that the weak equivalence principle holds, and that:[36] The outcome of any local non-gravitational experiment in a freely falling laboratory is independent of the velocity of the laboratory and its location in spacetime. Here "local" has a very special meaning: not only must the experiment not look outside the laboratory, but it must also be small compared to variations in the gravitational field, tidal forces, so that the entire laboratory is freely falling. It also implies the absence of interactions with "external" fields other than the gravitational field. The principle of relativity implies that the outcome of local experiments must be independent of the velocity of the apparatus, so the most important consequence of this principle is the Copernican idea that dimensionless physical values such as the fine-structure constant and electron-to-proton mass ratio must not depend on where in space or time we measure them. Many physicists believe that any Lorentz invariant theory that satisfies the weak equivalence principle also satisfies the Einstein equivalence principle. Schiff's conjecture suggests that the weak equivalence principle implies the Einstein equivalence principle, but it has not been proven. Nonetheless, the two principles are tested with very different kinds of experiments. The Einstein equivalence principle has been criticized as imprecise, because there is no universally accepted way to distinguish gravitational from non-gravitational experiments (see for instance Hadley[37] and Durand[38]). Tests of the Einstein equivalence principle In addition to the tests of the weak equivalence principle, the Einstein equivalence principle can be tested by searching for variation of dimensionless constants and mass ratios. The present best limits on the variation of the fundamental constants have mainly been set by studying the naturally occurring Oklo natural nuclear fission reactor, where nuclear reactions similar to ones we observe today have been shown to have occurred underground approximately two billion years ago. These reactions are extremely sensitive to the values of the fundamental constants. Constant Year Method Limit on fractional change proton gyromagnetic factor 1976 astrophysical 10^−1 weak interaction constant 1976 Oklo 10^−2 fine structure constant 1976 Oklo 10^−7 electron–proton mass ratio 2002 quasars 10^−4 There have been a number of controversial attempts to constrain the variation of the strong interaction constant. There have been several suggestions that "constants" do vary on cosmological scales. The best known is the reported detection of variation (at the 10−5 level) of the fine-structure constant from measurements of distant quasars, see Webb et al.[39] Other researchers dispute these findings. Other tests of the Einstein equivalence principle are gravitational redshift experiments, such as the Pound–Rebka experiment which test the position independence of experiments. The strong equivalence principle The strong equivalence principle suggests the laws of gravitation are independent of velocity and location. In particular, The gravitational motion of a small test body depends only on its initial position in spacetime and velocity, and not on its constitution. The outcome of any local experiment (gravitational or not) in a freely falling laboratory is independent of the velocity of the laboratory and its location in spacetime. The first part is a version of the weak equivalence principle that applies to objects that exert a gravitational force on themselves, such as stars, planets, black holes or Cavendish experiments. The second part is the Einstein equivalence principle (with the same definition of "local"), restated to allow gravitational experiments and self-gravitating bodies. The freely-falling object or laboratory, however, must still be small, so that tidal forces may be neglected (hence "local experiment"). This is the only form of the equivalence principle that applies to self-gravitating objects (such as stars), which have substantial internal gravitational interactions. It requires that the gravitational constant be the same everywhere in the universe and is incompatible with a fifth force. It is much more restrictive than the Einstein equivalence principle. The strong equivalence principle suggests that gravity is entirely geometrical by nature (that is, the metric alone determines the effect of gravity) and does not have any extra fields associated with it. If an observer measures a patch of space to be flat, then the strong equivalence principle suggests that it is absolutely equivalent to any other patch of flat space elsewhere in the universe. Einstein's theory of general relativity (including the cosmological constant) is thought to be the only theory of gravity that satisfies the strong equivalence principle. A number of alternative theories, such as Brans–Dicke theory, satisfy only the Einstein equivalence principle. Tests of the strong equivalence principle The strong equivalence principle can be tested by searching for a variation of Newton's gravitational constant G over the life of the universe, or equivalently, variation in the masses of the fundamental particles. A number of independent constraints, from orbits in the solar system and studies of Big Bang nucleosynthesis have shown that G cannot have varied by more than 10%. Thus, the strong equivalence principle can be tested by searching for fifth forces (deviations from the gravitational force-law predicted by general relativity). These experiments typically look for failures of the inverse-square law (specifically Yukawa forces or failures of Birkhoff's theorem) behavior of gravity in the laboratory. The most accurate tests over short distances have been performed by the Eöt–Wash group. A future satellite experiment, SEE (Satellite Energy Exchange), will search for fifth forces in space and should be able to further constrain violations of the strong equivalence principle. Other limits, looking for much longer-range forces, have been placed by searching for the Nordtvedt effect, a "polarization" of solar system orbits that would be caused by gravitational self-energy accelerating at a different rate from normal matter. This effect has been sensitively tested by the Lunar Laser Ranging Experiment. Other tests include studying the deflection of radiation from distant radio sources by the sun, which can be accurately measured by very long baseline interferometry. Another sensitive test comes from measurements of the frequency shift of signals to and from the Cassini spacecraft. Together, these measurements have put tight limits on Brans–Dicke theory and other alternative theories of gravity. In 2014, astronomers discovered a stellar triple system including a millisecond pulsar PSR J0337+1715 and two white dwarfs orbiting it. The system provided them a chance to test the strong equivalence principle in a strong gravitational field with high accuracy.[40][41][42] In 2020, a group of astronomers analyzing data from from the Spitzer Photometry and Accurate Rotation Curves (SPARC) sample together with estimates of the large-scale external gravitational field from an all-sky galaxy catalog, concluded that there was highly statistically significant evidence of violations of the strong equivalence principle in weak gravitational fields in the vicinity of rotationally supported galaxies.[43] They observed an effect consistent with the external field effect of Modified Newtonian dynamics (MOND), a toy model modified gravity theory beyond General Relativity, and inconsistent with tidal effects in the Lambda-CDM model paradigm commonly known as the Standard Model of Cosmology. One challenge to the equivalence principle is the Brans–Dicke theory. Self-creation cosmology is a modification of the Brans–Dicke theory. The Fredkin Finite Nature Hypothesis is an even more radical challenge to the equivalence principle and has even fewer supporters. In August 2010, researchers from the University of New South Wales, Swinburne University of Technology, and Cambridge University published a paper titled "Evidence for spatial variation of the fine structure constant", whose tentative conclusion is that, "qualitatively, [the] results suggest a violation of the Einstein Equivalence Principle, and could infer a very large or infinite universe, within which our 'local' Hubble volume represents a tiny fraction."[44] Dutch physicist and string theorist Erik Verlinde has generated a self-contained, logical derivation of the equivalence principle based on the starting assumption of a holographic universe. Given this situation, gravity would not be a true fundamental force as is currently thought but instead an "emergent property" related to entropy. Verlinde's entropic gravity theory apparently leads naturally to the correct observed strength of dark energy; previous failures to explain its incredibly small magnitude have been called by such people as cosmologist Michael Turner (who is credited as having coined the term "dark energy") as "the greatest embarrassment in the history of theoretical physics".[45] These ideas are far from settled and still very controversial. University of Washington[46] Lunar Laser Ranging[47][48] Galileo-Galilei satellite experiment[49] Satellite Test of the Equivalence Principle (STEP)[50] Satellite Energy Exchange (SEE)[52] "...Physicists in Germany have used an atomic interferometer to perform the most accurate ever test of the equivalence principle at the level of atoms..."[53] See also Classical mechanics Einstein's thought experiments Equivalence principle (geometric) Gauge gravitation theory General covariance Mach's principle Tests of general relativity Unsolved problems in astronomy Unsolved problems in physics Einstein, Albert, How I Constructed the Theory of Relativity, translated by Masahiro Morikawa from the text recorded in Japanese by Jun Ishiwara, Association of Asia Pacific Physical Societies (AAPPS) Bulletin, Vol. 15, No. 2, pp. 17–19, April 2005. Einstein recalls events of 1907 in a talk in Japan on 14 December 1922. Einstein, Albert (2003). The Meaning of Relativity. Routledge. p. 59. ISBN 9781134449798. Macdonald, Alan (15 September 2012). "General Relativity in a Nutshell" (PDF). Luther College. p. 32. Wagner, Todd A.; Schlamminger, Stephan; Gundlach, Jens H.; Adelberger, Eric G. (2012). "Torsion-balance tests of the weak equivalence principle". Classical and Quantum Gravity. 29 (18): 184002.arXiv :1207.2442. Bibcode:2012CQGra..29r4002W. doi:10.1088/0264-9381/29/18/184002. S2CID 59141292. Champion, David J.; Ransom, Scott M.; Lazarus, Patrick; Camilo, Fernando; et al. (2008). ""; "Title". Science. 320 (5881): 1309–12.arXiv:0805.2396. doi:10.1126/science.1157580. PMID 18483399. S2CID Wesson, Paul S. (2006). Five-dimensional Physics. World Scientific. p. 82. ISBN 978-981-256-661-4. Devreese, Jozef T.; Vanden Berghe, Guido (2008). 'Magic Is No Magic': The Wonderful World of Simon Stevin. p. 154. ISBN 9781845643911. Eötvös, Loránd; Annalen der Physik (Leipzig) 68 11 (1922) Roll, Peter G.; Krotkov, Robert; Dicke, Robert H.; The equivalence of inertial and passive gravitational mass, Annals of Physics, Volume 26, Issue 3, 20 February 1964, pp. 442–517 "Weak Equivalence Principle test on the moon". Schlamminger, Stephan; Choi, Ki-Young; Wagner, Todd A.; Gundlach, Jens H.; Adelberger, Eric G. (2008). "Test of the Equivalence Principle Using a Rotating Torsion Balance". Physical Review Letters. 100 (4): 041101.arXiv:0712.0607. Bibcode:2008PhRvL.100d1101S. doi:10.1103/PhysRevLett.100.041101. PMID 18352252. S2CID 18653407. Ciufolini, Ignazio; Wheeler, John A.; "Gravitation and Inertia", Princeton, New Jersey: Princeton University Press, 1995, pp. 117–119 Philoponus, John; "Corollaries on Place and Void", translated by David Furley, Ithaca, New York: Cornell University Press, 1987 Stevin, Simon; De Beghinselen der Weeghconst ["Principles of the Art of Weighing"], Leyden, 1586; Dijksterhuis, Eduard J.; "The Principal Works of Simon Stevin", Amsterdam, 1955 Galilei, Galileo; "Discorsi e Dimostrazioni Matematiche Intorno a Due Nuove Scienze", Leida: Appresso gli Elsevirii, 1638; "Discourses and Mathematical Demonstrations Concerning Two New Sciences", Leiden: Elsevier Press, 1638 Newton, Isaac; "Philosophiae Naturalis Principia Mathematica" [Mathematical Principles of Natural Philosophy and his System of the World], translated by Andrew Motte, revised by Florian Cajori, Berkeley, California: University of California Press, 1934; Newton, Isaac; "The Principia: Mathematical Principles of Natural Philosophy", translated by I. Bernard Cohen and Anne Whitman, with the assistance of Julia Budenz, Berkeley, California: University of California Press, 1999 Bessel, Friedrich W.; "Versuche Uber die Kraft, mit welcher die Erde Körper von verschiedner Beschaffenhelt anzieht", Annalen der Physik und Chemie, Berlin: J. Ch. Poggendorff, 25 401–408 (1832) R. v. Eötvös 1890 Mathematische und Naturwissenschaftliche Berichte aus Ungarn, 8, 65; Annalen der Physik (Leipzig) 68 11 (1922); Smith, G. L.; Hoyle, C. D.; Gundlach, J. H.; Adelberger, E. G.; Heckel, B. R.; Swanson, H. E. (1999). "Short-range tests of the equivalence principle". Physical Review D. 61 (2). doi:10.1103/PhysRevD.61.022001. Southerns, Leonard (1910). "A Determination of the Ratio of Mass to Weight for a Radioactive Substance". Proceedings of the Royal Society of London. 84 (571): 325–344. Bibcode:1910RSPSA..84..325S. Zeeman, Pieter (1918) "Some experiments on gravitation: The ratio of mass to weight for crystals and radioactive substances", Proceedings of the Koninklijke Nederlandse Akademie van Wetenschappen, Amsterdam 20(4) 542–553 Potter, Harold H. (1923). "Some Experiments on the Proportionality of Mass and Weight". Proceedings of the Royal Society of London. 104 (728): 588–610. Bibcode:1923RSPSA.104..588P. doi:10.1098/ Renner, János (1935). "Kísérleti vizsgálatok a tömegvonzás és tehetetlenség arányosságáról". Mathematikai és Természettudományi Értesítő. 53: 569. Braginski, Vladimir Borisovich; Panov, Vladimir Ivanovich (1971). "Журнал Экспериментальной и Теоретической Физики". (Zhurnal Éksperimental'noĭ I Teoreticheskoĭ Fiziki, Journal of Experimental and Theoretical Physics). 61: 873. Shapiro, Irwin I.; Counselman, III; Charles, C.; King, Robert W. (1976). "Verification of the principle of equivalence for massive bodies". Physical Review Letters. 36 (11): 555–558. Bibcode:1976PhRvL..36..555S. doi:10.1103/physrevlett.36.555. Archived from the original on 22 January 2014. Keiser, George M.; Faller, James E. (1979). Bulletin of the American Physical Society. 24: 579. Missing or empty |title= (help) Niebauer, Timothy M.; McHugh, Martin P.; Faller, James E. (1987). "Galilean test for the fifth force". Physical Review Letters (Submitted manuscript). 59 (6): 609–612. Bibcode:1987PhRvL..59..609N. doi:10.1103/physrevlett.59.609. PMID 10035824. Stubbs, Christopher W.; Adelberger, Eric G.; Heckel, Blayne R.; Rogers, Warren F.; Swanson, H. Erik; Watanabe, R.; Gundlach, Jens H.; Raab, Frederick J. (1989). "Limits on Composition-Dependent Interactions Using a Laboratory Source: Is There a "Fifth Force" Coupled to Isospin?". Physical Review Letters. 62 (6): 609–612. Bibcode:1989PhRvL..62..609S. doi:10.1103/physrevlett.62.609. PMID Adelberger, Eric G.; Stubbs, Christopher W.; Heckel, Blayne R.; Su, Y.; Swanson, H. Erik; Smith, G. L.; Gundlach, Jens H.; Rogers, Warren F. (1990). "Testing the equivalence principle in the field of the Earth: Particle physics at masses below 1 μeV?". Physical Review D. 42 (10): 3267–3292. Bibcode:1990PhRvD..42.3267A. doi:10.1103/physrevd.42.3267. PMID 10012726. Baeßler, Stefan; et al. (2001). "Remarks by Heinrich Hertz (1857-94) on the equivalence principle". Classical and Quantum Gravity. 18 (13): 2393. Bibcode:2001CQGra..18.2393B. doi:10.1088/0264-9381/18 Baeßler, Stefan; Heckel, Blayne R.; Adelberger, Eric G.; Gundlach, Jens H.; Schmidt, Ulrich; Swanson, H. Erik (1999). "Improved Test of the Equivalence Principle for Gravitational Self-Energy". Physical Review Letters. 83 (18): 3585. Bibcode:1999PhRvL..83.3585B. doi:10.1103/physrevlett.83.3585. Touboul, Pierre; Métris, Gilles; Rodrigues, Manuel; André, Yves; Baghi, Quentin; Bergé, Joël; Boulanger, Damien; Bremer, Stefanie; Carle, Patrice; Chhun, Ratana; Christophe, Bruno; Cipolla, Valerio; Damour, Thibault; Danto, Pascale; Dittus, Hansjoerg; Fayet, Pierre; Foulon, Bernard; Gageant, Claude; Guidotti, Pierre-Yves; Hagedorn, Daniel; Hardy, Emilie; Huynh, Phuong-Anh; Inchauspe, Henri; Kayser, Patrick; Lala, Stéphanie; Lämmerzahl, Claus; Lebat, Vincent; Leseur, Pierre; Liorzou, Françoise; et al. (2017). "MICROSCOPE Mission: First Results of a Space Test of the Equivalence Principle". Physical Review Letters. 119 (23): 231101.arXiv:1712.01176. Bibcode:2017PhRvL.119w1101T. doi:10.1103/PhysRevLett.119.231101. PMID 29286705. S2CID 6211162. Dittus, Hansjörg; Lāmmerzahl, Claus (2005). "Experimental Tests of the Equivalence Principle and Newton's Law in Space" (PDF). Gravitation and Cosmology: 2nd Mexican Meeting on Mathematical and Experimental Physics, AIP Conference Proceedings. 758: 95. Bibcode:2005AIPC..758...95D. doi:10.1063/1.1900510. Archived from the original (PDF) on 17 December 2008. Kimura, M.; Aghion, S.; Amsler, C.; Ariga, A.; Ariga, T.; Belov, A.; Bonomi, G.; Bräunig, P.; Bremer, J.; Brusa, R. S.; Cabaret, L.; Caccia, M.; Caravita, R.; Castelli, F.; Cerchiari, G.; Chlouba, K.; Cialdi, S.; Comparat, D.; Consolati, G.; Demetrio, A.; Derking, H.; Di Noto, L.; Doser, M.; Dudarev, A.; Ereditato, A.; Ferragut, R.; Fontana, A.; Gerber, S.; Giammarchi, M.; et al. (2015). "Testing the Weak Equivalence Principle with an antimatter beam at CERN". Journal of Physics: Conference Series. 631 (1): 012047. Bibcode:2015JPhCS.631a2047K. doi:10.1088/1742-6596/631/1/012047. Overduin, James; Everitt, Francis; Mester, John; Worden, Paul (2009). "The Science Case for STEP". Advances in Space Research. 43 (10): 1532–1537.arXiv:0902.2247. Bibcode:2009AdSpR..43.1532O. doi:10.1016/j.asr.2009.02.012. S2CID 8019480. Haugen, Mark P.; Lämmerzahl, Claus (2001). Principles of Equivalence: Their Role in Gravitation Physics and Experiments that Test Them. Gyros. 562. pp. 195–212.arXiv:gr-qc/0103067. Bibcode:2001LNP...562..195H. doi:10.1007/3-540-40988-2_10. ISBN 978-3-540-41236-6. S2CID 15430387. Hadley, Mark J. (1997). "The Logic of Quantum Mechanics Derived from Classical General Relativity". Foundations of Physics Letters. 10 (1): 43–60.arXiv:quant-ph/9706018. Bibcode:1997FoPhL..10...43H. CiteSeerX 10.1.1.252.6335. doi:10.1007/BF02764119. S2CID 15007947. Durand, Stéphane (2002). "An amusing analogy: modelling quantum-type behaviours with wormhole-based time travel". Journal of Optics B: Quantum and Semiclassical Optics. 4 (4): S351–S357. Bibcode:2002JOptB...4S.351D. doi:10.1088/1464-4266/4/4/319. Webb, John K.; Murphy, Michael T.; Flambaum, Victor V.; Dzuba, Vladimir A.; Barrow, John D.; Churchill, Chris W.; Prochaska, Jason X.; Wolfe, Arthur M. (2001). "Further Evidence for Cosmological Evolution of the Fine Structure Constant". Physical Review Letters. 87 (9): 091301.arXiv:astro-ph/0012539. Bibcode:2001PhRvL..87i1301W. doi:10.1103/PhysRevLett.87.091301. PMID 11531558. S2CID Ransom, Scott M.; et al. (2014). "A millisecond pulsar in a stellar triple system". Nature. 505 (7484): 520–524.arXiv:1401.0535. Bibcode:2014Natur.505..520R. doi:10.1038/nature12917. PMID 24390352. S2CID 4468698. Anne M. Archibald; et al. (4 July 2018). "Universality of free fall from the orbital motion of a pulsar in a stellar triple system". Nature. 559 (7712): 73–76.arXiv:1807.02059. Bibcode:2018Natur.559...73A. doi:10.1038/s41586-018-0265-1. PMID 29973733. S2CID 49578025. "Even Phenomenally Dense Neutron Stars Fall like a Feather – Einstein Gets It Right Again". Charles Blue, Paul Vosteen. NRAO. 4 July 2018. Chae, Kyu-Hyun, et al. (2020), "Testing the Strong Equivalence Principle: Detection of the External Field Effect in Rotationally Supported Galaxies" Applied Physics Letters (publication forthcoming) Webb, John K.; King, Julian A.; Murphy, Michael T.; Flambaum, Victor V.; Carswell, Robert F.; Bainbridge, Matthew B. (2010). "Evidence for spatial variation of the fine structure constant". Physical Review Letters. 107 (19): 191101.arXiv:1008.3907. Bibcode:2011PhRvL.107s1101W. doi:10.1103/PhysRevLett.107.191101. PMID 22181590. S2CID 23236775. Wright, Karen (1 March 2001). "Very Dark Energy". Discover Magazine. Eöt–Wash group "Fundamental Physics of Space - Technical Details". Archived from the original on 28 November 2016. Retrieved 7 May 2005. Viswanathan, V; Fienga, A; Minazzoli, O; Bernus, L; Laskar, J; Gastineau, M (May 2018). "The new lunar ephemeris INPOP17a and its application to fundamental physics". Monthly Notices of the Royal Astronomical Society. 476 (2): 1877–1888.arXiv:1710.09167. Bibcode:2018MNRAS.476.1877V. doi:10.1093/mnras/sty096. S2CID 119454879. ""GALILEO GALILEI" GG Small Mission Project". "S T e P". "Archived copy". Archived from the original on 27 February 2015. Retrieved 7 May 2005. "Archived copy". Archived from the original on 7 May 2005. Retrieved 7 May 2005. 16 November 2004, physicsweb: Equivalence principle passes atomic test Dicke, Robert H.; "New Research on Old Gravitation", Science 129, 3349 (1959). This paper is the first to make the distinction between the strong and weak equivalence principles. Dicke, Robert H.; "Mach's Principle and Equivalence", in Evidence for gravitational theories: proceedings of course 20 of the International School of Physics "Enrico Fermi", ed. C. Møller (Academic Press, New York, 1962). This article outlines the approach to precisely testing general relativity advocated by Dicke and pursued from 1959 onwards. Einstein, Albert; "Über das Relativitätsprinzip und die aus demselben gezogene Folgerungen", Jahrbuch der Radioaktivitaet und Elektronik 4 (1907); translated "On the relativity principle and the conclusions drawn from it", in The collected papers of Albert Einstein. Vol. 2 : The Swiss years: writings, 1900–1909 (Princeton University Press, Princeton, New Jersey, 1989), Anna Beck translator. This is Einstein's first statement of the equivalence principle. Einstein, Albert; "Über den Einfluß der Schwerkraft auf die Ausbreitung des Lichtes", Annalen der Physik 35 (1911); translated "On the Influence of Gravitation on the Propagation of Light" in The collected papers of Albert Einstein. Vol. 3 : The Swiss years: writings, 1909–1911 (Princeton University Press, Princeton, New Jersey, 1994), Anna Beck translator, and in The Principle of Relativity, (Dover, 1924), pp 99–108, W. Perrett and G. B. Jeffery translators, ISBN 0-486-60081-5. The two Einstein papers are discussed online at The Genesis of General Relativity. Brans, Carl H.; "The roots of scalar-tensor theory: an approximate history",arXiv:gr-qc/0506063. Discusses the history of attempts to construct gravity theories with a scalar field and the relation to the equivalence principle and Mach's principle. Misner, Charles W.; Thorne, Kip S.; and Wheeler, John A.; Gravitation, New York: W. H. Freeman and Company, 1973, Chapter 16 discusses the equivalence principle. Ohanian, Hans; and Ruffini, Remo; Gravitation and Spacetime 2nd edition, New York: Norton, 1994, ISBN 0-393-96501-5 Chapter 1 discusses the equivalence principle, but incorrectly, according to modern usage, states that the strong equivalence principle is wrong. Uzan, Jean-Philippe; "The fundamental constants and their variation: Observational status and theoretical motivations", Reviews of Modern Physics 75, 403 (2003).arXiv:hep-ph/0205340 This technical article reviews the best constraints on the variation of the fundamental constants. Will, Clifford M.; Theory and experiment in gravitational physics, Cambridge, UK: Cambridge University Press, 1993. This is the standard technical reference for tests of general relativity. Will, Clifford M.; Was Einstein Right?: Putting General Relativity to the Test, Basic Books (1993). This is a popular account of tests of general relativity. Will, Clifford M.; The Confrontation between General Relativity and Experiment, Living Reviews in Relativity (2006). An online, technical review, covering much of the material in Theory and experiment in gravitational physics. The Einstein and strong variants of the equivalence principles are discussed in sections 2.1 and 3.1, respectively. Friedman, Michael; Foundations of Space-Time Theories, Princeton, New Jersey: Princeton University Press, 1983. Chapter V discusses the equivalence principle. Ghins, Michel; Budden, Tim (2001), "The Principle of Equivalence", Stud. Hist. Phil. Mod. Phys., 32 (1): 33–51, Bibcode:2001SHPMP..32...33G, doi:10.1016/S1355-2198(00)00038-1 Ohanian, Hans C. (1977), "What is the Principle of Equivalence?", American Journal of Physics, 45 (10): 903–909, Bibcode:1977AmJPh..45..903O, doi:10.1119/1.10744 Di Casola, E.; Liberati, S.; Sonego, S. (2015), "Nonequivalence of equivalence principles", American Journal of Physics, 83 (1): 39,arXiv:1310.7426, Bibcode:2015AmJPh..83...39D, doi:10.1119/ 1.4895342, S2CID 119110646 al Relativity Special relativity Principle of relativity (Galilean relativity Galilean transformation) Special relativity Doubly special relativity Frame of reference Speed of light Hyperbolic orthogonality Rapidity Maxwell's equations Proper length Proper time Relativistic mass Lorentz transformation Time dilation Mass–energy equivalence Length contraction Relativity of simultaneity Relativistic Doppler effect Thomas precession Ladder paradox Twin paradox Light cone World line Minkowski diagram Biquaternions Minkowski space General relativity Introduction Mathematical formulation Equivalence principle Riemannian geometry Penrose diagram Geodesics Mach's principle ADM formalism BSSN formalism Einstein field equations Linearized gravity Post-Newtonian formalism Raychaudhuri equation Hamilton–Jacobi–Einstein equation Ernst equation Black hole Event horizon Singularity Two-body problem Gravitational waves: astronomy detectors (LIGO and collaboration Virgo LISA Pathfinder GEO) Hulse–Taylor binary Other tests: precession of Mercury lensing redshift Shapiro delay frame-dragging / geodetic effect (Lense–Thirring precession) pulsar timing arrays Brans–Dicke theory Kaluza–Klein Quantum gravity Cosmological: Friedmann–Lemaître–Robertson–Walker (Friedmann equations) Kasner BKL singularity Gödel Milne Spherical: Schwarzschild (interior Tolman–Oppenheimer–Volkoff equation) Reissner–Nordström Lemaître–Tolman Axisymmetric: Kerr (Kerr–Newman) Weyl−Lewis−Papapetrou Taub–NUT van Stockum dust discs Others: pp-wave Ozsváth–Schücking metric Poincaré Lorentz Einstein Hilbert Schwarzschild de Sitter Weyl Eddington Friedmann Lemaître Milne Robertson Chandrasekhar Zwicky Wheeler Choquet-Bruhat Kerr Zel'dovich Novikov Ehlers Geroch Penrose Hawking Taylor Hulse Bondi Misner Yau Thorne Weiss others Theory of relativity Hellenica World - Scientific Library Retrieved from "http://en.wikipedia.org/" All text is available under the terms of the GNU Free Documentation License
{"url":"https://www.hellenicaworld.com/Science/Physics/en/Equivalenceprinciple.html","timestamp":"2024-11-07T07:42:15Z","content_type":"application/xhtml+xml","content_length":"63252","record_id":"<urn:uuid:b2872d7d-7caa-4c46-a880-6aab7f1cedd9>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00438.warc.gz"}
FOIL Calculator An online foil calculator determines the multiplication of two different binomials by using the foil method. Additionally, this foil method calculator displays step-by-step simplification of given expressions. Here we have lots of informative stuff related to the foil method, let’s get a jump start with some basics. What is the Foil Method? In Algebra, the FOIL is a standard method for multiplying two expressions. The word FOIL for the four terms of the product is: • The first means multiply the first term of each binomial. • Outer means multiply the first term of the first binomial and the second term of the second binomial. • Inner means the second term of the first expression and the first term of the second expression. • Last means multiply the last terms in the expression. Once the process is finished, then simplify the binomial. However, an Online Prime Factorization Calculator makes prime factors of any number, create a list of all prime numbers up to any number Multiply the binomials using the foil method: (2x + 1) (5x + 7) By using the FOIL method: Simplify the algebraic expressions: =10x^2+14x+5x+7 (2x + 1) (5x + 7) = 10x^2+19x+7 Multiply the following: (4x−5) (x−7) Just follow the letters in FOIL: First: 4x∗x=4x^2 Outside: 4x∗(−7)=−28x Inside: −5∗x=−5x Last: (−5)∗(−7)=35 Sum it all up and you get: (4x^2−33x+35). However, An Online Factoring Calculator helps to factor any expression (polynomial, binomial, trinomial). The Distributive law: The FOIL method is similar to the two-step procedure of the distributive law: In the first step, the (y + z) is distributed over the sum in the first expression. In the second step, the distributive law is applied to simplify each term of the two binomials. Also, this method requires a total of three applications of the distribution property. In contrast to the FOIL, the distributive method can be applied without any difficulty to multiplications with more binomials such as trinomials. How Foil Calculator Works? The online foil method calculator provides the product of two terms and simplifies them using distributive law with these steps: • Enter the two binomials in the box. • Hit the calculate button to see the results. • The foil calculator provides the answer using the foil method. • Display step-wise solution. • You can do foil math numerous times by clicking the re-calculate button. How can I simplify an equation? • First of all, remove the parentheses with the multiplication of factors. • Then, combine like factors by adding coefficients. • Now, combine the constants. How we can foil the trinomials? The multiplication of trinomials first foils out factored terms by multiplying every term in one trinomial to every term in the other trinomial. What is the reverse foil method? Reverse foil is another process of factoring the quadratic trinomials by trial-and-error. The process is to find the First terms and Last terms of each expression in the factored product so the Outer products and Inner products are added to the middle terms. Use this Foil calculator for the product of two factors. So, the FOIL method is used to remember the required steps to multiply the two expressions. Remember that when you want to multiply two binomials together you must multiply the numbers and add the exponents. To make it convenient for you, our online foiling calculator does all calculations quickly which is equally beneficial for beginners and professionals. From the source of Wikipedia: The distributive law, Reverse FOIL, Table as an alternative to FOIL, Generalizations. From the source of Chili math: FOIL Method, Multiply Binomials using the FOIL Method. From the source of Purple Math: Use foil to simplify, Multiply and simplify, Multiplying Binomials: "foil" (and a warning).
{"url":"https://calculator-online.net/foil-calculator/","timestamp":"2024-11-04T05:17:11Z","content_type":"text/html","content_length":"60068","record_id":"<urn:uuid:a0d3ae92-ee48-4161-bbbb-c8bab82ae3b9>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00342.warc.gz"}
Variants of vertex and edge colorings of graphs Date of Submission Institute Name (Publisher) Indian Statistical Institute Document Type Doctoral Thesis Degree Name Doctor of Philosophy Subject Name Computer Science Computer Science Unit (CSU-Chennai) Francis, Mathew C. (CSU-Chennai; ISI) Abstract (Summary of the Work) A k-linear coloring of a graph G is an edge coloring of G with k colors so that each color class forms a linear forest—a forest whose each connected component is a path. The linear arboricity χ′ l(G) of G is the minimum integer k such that there exists a k-linear coloring of G. Akiyama, Exoo and Harary conjectured in 1980 that for every graph G, χ′ l(G) ≤ l∆(G)+1 2 m where ∆(G) is the maximum degree of G. First, we prove the conjecture for 3-degenerate graphs. This establishes the conjecture for graphs of treewidth at most 3 and provides an alternative proof for the conjecture in some classes of graphs like cubic graphs and triangle-free planar graphs for which the conjecture was already known to be true. Next, we prove that for every 2-degenerate graph G, χ′ l(G) = l∆(G) 2 m if ∆ (G) ≥ 5. We conjecture that this equality holds also when ∆(G) ∈ {3, 4} and show that this is the case for some well-known subclasses of 2-degenerate graphs. All the above proofs can be converted into linear time algorithms that produce linear colorings of input 3-degenerate and 2-degenerate graphs using a number of colors matching the upper bounds on linear arboricity proven for these classes of graphs. Motivated by this, we then show that for every 3-degenerate graph, χ′ l(G) = l∆(G) 2 m if ∆(G) ≥ 9. Further, we show that this line of reasoning can be extended to obtain a different proof for the linear arboricity conjecture for all 3-degenerate graphs. This proof has the advantage that it gives rise to a simpler linear time algorithm for obtaining a linear coloring of an input 3-degenerate graph G using at most one more color than the linear arboricity of G. A p-centered coloring of a graph G, where p is a positive integer, is a coloring of the vertices of G in such a way that every connected subgraph of G either contains a vertex with a unique color or contains more than p different colors. As p increases, we get a hierarchy of more and more restricted colorings, starting from proper vertex colorings, which are exactly the 1-centered colorings. Debski, Felsner, Micek and Schroder proved that bounded degree graphs have p-centered colorings using O (p) colors. But since their method is based on the technique of entropy compression, it cannot be used to obtain a description of an explicit coloring even for relatively simple graphs. In fact, they ask if an explicit p-centered coloring using O(p) colors can be constructed for the planar grid. We answer their question by demonstrating a construction for obtaining such a coloring for the planar ProQuest Collection ID: https://www.proquest.com/pqdtlocal1010185/dissertations/fromDatabasesLayer?accountid=27563 DSpace Identifier Recommended Citation Pattanayak, Drimit Dr., "Variants of vertex and edge colorings of graphs" (2024). Doctoral Theses. 474.
{"url":"https://digitalcommons.isical.ac.in/doctoral-theses/474/","timestamp":"2024-11-02T00:05:15Z","content_type":"text/html","content_length":"41557","record_id":"<urn:uuid:1df2f8d6-1b40-47a5-9c5c-08ad45bbec96>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00713.warc.gz"}
Listing every possible combination of a set The problem is simple. Say you have a set of 12 elements. You want to find and to list every possible unique combination of those elements, irrespective of the ordering within each combination. The number of elements making up each combination can range between 1 and 12. Thanks to the demands of some university work, I've written a script that does just this (written in PHP). Whack it on your web server (or command-line), give it a spin, hack away at it, and use it to your heart's content. List of all possible combinations The most important trick with this problem was to find only the possible combinations (i.e. unique sets irrespective of order), rather than all possible permutations (i.e. unique sets where ordering matters). With my first try, I made a script that first found all possible permutations, and that then culled the list down to only the unique combinations. Since the number of possible permutations is monumentally greater than the number of combinations for a given set, this quickly proved unwieldy: the script was running out of memory with a set size of merely 7 elements (and that was after I increased PHP's memory limit to 2GB!). The current script uses a more intelligent approach in order to only target unique combinations, and (from my testing) it's able to handle a set size of up to ~15 elements. Still not particularly scalable, but it was good enough for my needs. Unfortunately, both permutations and combinations increase factorially in relation to the set size; and if you know anything about computational complexity, then you'll know that an algorithm which runs in factorial time is about the least scalable type of algorithm that you can write. This script produces essentially equivalent output to this "All Combinations" applet, except that it's an open-source customisable script instead of a closed-source proprietary applet. I owe some inspiration to the applet, simply for reassuring me that it can be done. I also owe a big thankyou to Dr. Math's Permutations and Combinations, which is a great page explaining the difference between permutations and combinations, and providing the formulae used to calculate the totals for each of them. File attachments
{"url":"https://greenash.net.au/thoughts/2008/10/listing-every-possible-combination-of-a-set/","timestamp":"2024-11-14T07:05:33Z","content_type":"text/html","content_length":"7680","record_id":"<urn:uuid:27ec8ecb-b647-4677-a0f9-8b3e0942a4df>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00505.warc.gz"}
CPM Homework Help A rectangle has an area of $\frac { 1 } { 6 }$ square centimeters and a length of $1.5$ centimeters. What is the width? What is the perimeter? Here is a diagram of the given information to help you find the perimeter. You will need to find the width first.
{"url":"https://homework.cpm.org/category/ACC/textbook/acc6/chapter/6%20Unit%206/lesson/CC1:%206.2.1/problem/6-70","timestamp":"2024-11-04T02:34:11Z","content_type":"text/html","content_length":"33087","record_id":"<urn:uuid:cec30bd8-3094-435c-9bca-dd6829b04f89>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00225.warc.gz"}
Nearest Habitable Planet candidates 1g Star Ship The acceleration due to gravity on the surface of the Earth (g) is 9.81 meters per second per second. The Propulsion Sytem of a 1g Star Ship Class ship maintains a one g acceleration while in Nearest Habitable Planet candidates There are two habitable planet candidates orbiting Tau Ceti in the constellation Cetus which is directly south of Pisces. Tau Ceti is 11.90 light years from Earth the voyage is under constant one g acceleration flip the ship around at the midpoint initial velocity relative to the Sun is zero final velocity relative to Tau Ceti is zero Simplify the Math measure time in years and distance in light years 1 year = 31,557,600 seconds speed of light = 299,792,458 meters per second = 1 ly / yr 1 light-year = 9,460,730,472,580,800 meters = 1 ly 1 light-year = 5,878,625,000,000 miles = 1 ly one g acceleration = a = 9.81 meters per second per second = 1.03 ly /yr^2 (9.81 m / s^2) x (31,557,600 s / yr) x (31,557,600 s / yr) x (ly / 9,460,730,472,580,800 m ) Tau Ceti is 112,582,692,623,711,520 meters from Earth = 11.90 ly turnover point is at 56,291,346,311,855,760 meters from Earth = 5.95 ly Relativistic Considerations thought about this and calculated the implications Time dilation at constant acceleration - excerpt from: http://en.wikipedia.org/wiki/Time_dilation The Relativistic Rocket - http://math.ucr.edu/home/baez/physics/Relativity/SR/rocket.html Tau Ceti is 11.90 ly away from Earth turnover point is at 5.95 ly ship velocity at turnover point is 99% light speed Earth measures a travel time of 13 years 8 months 2 weeks 1g Star Ship has a travel time of 5 years 2 months 6 days its possible its survivable ambient temperature superconductors are required imagination is a sustainable resource
{"url":"https://public.websites.umich.edu/~jimw/1g-starship/index.htm","timestamp":"2024-11-03T22:35:58Z","content_type":"application/xhtml+xml","content_length":"5017","record_id":"<urn:uuid:82ae0bee-089c-4e19-8d84-a8b89777bda4>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00117.warc.gz"}
28.8 grams to ounces Convert 28.8 Grams to Ounces (gm to oz) with our conversion calculator. 28.8 grams to ounces equals 1.015890048 oz. Enter grams to convert to ounces. Formula for Converting Grams to Ounces: ounces = grams ÷ 28.3495 By dividing the number of grams by 28.3495, you can easily obtain the equivalent weight in ounces. Understanding the Conversion from Grams to Ounces Converting grams to ounces is a common task, especially for those who work with both the metric and imperial systems. The conversion factor between these two units is essential for accurate measurements. One ounce is equivalent to approximately 28.3495 grams. This means that to convert grams to ounces, you will divide the number of grams by this conversion factor. The Formula for Converting Grams to Ounces The formula to convert grams (g) to ounces (oz) is straightforward: Ounces = Grams ÷ 28.3495 Step-by-Step Calculation: Converting 28.8 Grams to Ounces Let’s apply the formula to convert 28.8 grams to ounces: 1. Start with the amount in grams: 28.8 grams. 2. Use the conversion factor: 28.3495. 3. Now, divide 28.8 by 28.3495: 4. 28.8 ÷ 28.3495 ≈ 1.0183. 5. Finally, round the result to two decimal places: 1.02 ounces. The Importance of Grams to Ounces Conversion This conversion is crucial for bridging the gap between the metric and imperial systems, which are used in different parts of the world. Understanding how to convert between these units can help ensure accuracy in various fields, from cooking to scientific research. Practical Examples of Grams to Ounces Conversion 1. Cooking: Many recipes, especially those from the United States, use ounces for ingredient measurements. If you have a recipe that calls for 1 ounce of an ingredient, knowing that this is approximately 28.35 grams can help you adjust your measurements accurately. 2. Scientific Measurements: In laboratories, precise measurements are critical. Scientists often need to convert grams to ounces when collaborating with international teams or when publishing research that may be read by audiences familiar with different measurement systems. 3. Everyday Use: Whether you’re weighing food items, measuring out supplements, or even calculating postage for packages, knowing how to convert grams to ounces can simplify your tasks and improve In conclusion, converting 28.8 grams to ounces results in approximately 1.02 ounces. This simple yet essential conversion can enhance your understanding and application of measurements in various practical scenarios. Here are 10 items that weigh close to 28.8 grams to ounces – • Standard Paperclip Shape: Elongated oval with two loops. Dimensions: Approximately 3 cm in length. Usage: Commonly used to hold sheets of paper together. Fact: A standard paperclip weighs about 1 gram, so you would need 28 paperclips to reach 28.8 grams! • AA Battery Shape: Cylindrical. Dimensions: 5 cm in length and 1.4 cm in diameter. Usage: Used in various electronic devices like remote controls and flashlights. Fact: A typical AA battery weighs around 24 grams, so about 1.2 batteries would equal 28.8 grams. • Small Apple Shape: Round. Dimensions: Approximately 7-8 cm in diameter. Usage: Eaten raw as a snack or used in cooking and baking. Fact: A small apple typically weighs around 150 grams, so you would need about 0.19 of an apple to reach 28.8 grams! • USB Flash Drive Shape: Rectangular with a retractable connector. Dimensions: About 5 cm in length and 2 cm in width. Usage: Used for data storage and transfer between devices. Fact: A typical USB flash drive weighs around 10 grams, so you would need about 2.88 drives to reach 28.8 grams. • Standard Golf Ball Shape: Spherical. Dimensions: Approximately 4.3 cm in diameter. Usage: Used in the sport of golf. Fact: A standard golf ball weighs about 45.93 grams, so you would need about 0.63 of a golf ball to reach 28.8 grams! • Small Pack of Gum (5 pieces) Shape: Rectangular pack. Dimensions: About 7 cm x 5 cm x 1 cm. Usage: Chewed for fresh breath and flavor. Fact: A pack of gum typically weighs around 30 grams, so you would need about 0.96 packs to reach 28.8 grams. • Postage Stamp Booklet (10 stamps) Shape: Rectangular booklet. Dimensions: Approximately 10 cm x 7 cm. Usage: Used for mailing letters and packages. Fact: A booklet of 10 stamps weighs about 20 grams, so you would need about 1.44 booklets to reach 28.8 grams. • Small Keychain Shape: Various shapes, often circular or rectangular. Dimensions: Typically around 5 cm in length. Usage: Used to hold keys together. Fact: A small keychain usually weighs around 15 grams, so you would need about 1.92 keychains to reach 28.8 grams. • Plastic Spoon Shape: Curved with a long handle. Dimensions: About 15 cm in length. Usage: Used for eating or serving food. Fact: A plastic spoon weighs around 5 grams, so you would need about 5.76 spoons to reach 28.8 grams. • Small Notebook Shape: Rectangular. Dimensions: Approximately 10 cm x 15 cm. Usage: Used for writing notes or journaling. Fact: A small notebook typically weighs around 50 grams, so you would need about 0.58 notebooks to reach 28.8 grams! Other Oz <-> Gm Conversions –
{"url":"https://www.gptpromptshub.com/grams-ounce-converter/28-8-grams-to-ounces","timestamp":"2024-11-13T01:57:32Z","content_type":"text/html","content_length":"186428","record_id":"<urn:uuid:15916c6f-9350-4e03-b9ca-b423775b7afd>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00627.warc.gz"}
Free algebraic expression worksheets free algebraic expression worksheets Related topics: ged practice math free printable 6th'7th'8th free math games free download easy way to learn vector integration how to solve and graph a trinomial square root property solved problems in college algebra Algebra Concepts And Applications Book Practice Answers arithmetic progression ti-84 a program to learn exsel formula on line for free answer to math equations how to do algebra for school homework how to translate algebraic expressions Author Message Author Message injeld Posted: Wednesday 03rd of Jan 18:56 ZombieSlojer Posted: Sunday 07th of Jan 15:17 Hi, I am a senior in high school and need major help in Thank you, I would be very relieved to all of you if this free algebraic expression worksheets. My math grades software can help me. I really am very upset about my are awful and I have decided to do something about it. I grades. Can someone give me the link to it? Reg.: 19.05.2003 am looking for a website that will allow me to enter a Reg.: 02.05.2004 question and offers detailed step by step solution; basically it must take me through the entire thing. I really need to improve my grades so please help me out. AllejHat Posted: Friday 05th of Jan 11:52 Gog Posted: Tuesday 09th of Jan 09:26 Dear Friend , don't get stressed . Check out Algebrator is a great software and is certainly worth a https://softmath.com/ordering-algebra.html, try. You will find several interesting stuff there. I use it as https://softmath.com/algebra-features.html and reference software for my math problems and can Reg.: 16.07.2003 https://softmath.com/reviews-of-algebra-help.html. There Reg.: 07.11.2001 say that it has made learning math much more is a utility by name Algebrator offered at all the three enjoyable. websites . This instrument would render all the information that you would need on the title Remedial Algebra. But, ensure that you go through all the lessons intently . 3Di Posted: Wednesday 10th of Jan 11:49 Ashe Posted: Sunday 07th of Jan 08:33 Don’t worry my friend . As what I said, it displays the solution for the problem so you won’t really I agree , a good program can do miracles . I used a few have to copy the answer only but it makes you know but Algebrator is the best . It doesn't make a difference Reg.: 04.04.2005 how did the program came up with the answer. Just go what class you are in, I myself used it in Intermediate to this page Reg.: 08.07.2001 algebra and Basic Math as well , so you don't have to https://softmath.com/algebra-software-guarantee.html worry that it's not on your level. If you never used a and prepare to learn and solve faster . program until now I can tell you it's very easy , you don't have to know anything about the computer to use it. You just have to type in the keywords of the exercise, and then the software solves it step by step, so you get more than just the answer.
{"url":"https://softmath.com/parabola-in-math/converting-decimals/free-algebraic-expression.html","timestamp":"2024-11-09T04:00:26Z","content_type":"text/html","content_length":"51305","record_id":"<urn:uuid:ecacbecb-8da6-4439-8bd1-9fad2e1b1064>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00604.warc.gz"}
These tutorials provide narrative explanations, sample code, and expected output for the most common MNE-Python analysis tasks. The emphasis here is on thorough explanations that get you up to speed quickly, at the expense of covering only a limited number of topics. The sections and tutorials are arranged in a fixed order, so in theory a new user should be able to progress through in order without encountering any cases where background knowledge is assumed and unexplained. More experienced users (i.e., those with significant experience analyzing EEG/MEG signals with different software) can probably skip around to just the topics they need without too much trouble. If tutorial-scripts contain plots and are run locally, using the interactive flag with python -i tutorial_script.py keeps them open.
{"url":"https://mne.tools/stable/auto_tutorials/index.html","timestamp":"2024-11-11T04:56:37Z","content_type":"text/html","content_length":"127924","record_id":"<urn:uuid:571e3e23-8a73-4f38-a91d-7026d03468f7>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00391.warc.gz"}
Matrix - Printable Version Matrix - Richard Berler - 08-18-2013 How do I enter matrices, dimension them, designate a result matrix? I can do this on the 15C, but it's not obvious to me how to accomplish this on the 34s! Re: Matrix - Walter B - 08-18-2013 Please see pp. 35f of TFM. Continue reading (as indicated) on p. 144 and p. 101 for LINEQS. Edited: 18 Aug 2013, 1:16 p.m. Re: Matrix - Thomas Klemm - 08-18-2013 Without reading the aforementioned manual: a matrix descriptor consistst of three numbers: the register i where to start storing the elements, the number of rows r and the number of columns c. These three numbers are merged into one using the format i.rrcc. So for instance 5.0203 is a matrix descriptor for a 2x3 matrix whose elements are stored in register 05 and the following. This is similar to the numbers we use for ISG and DSE. The simplest way to enter a matrix is using the program MED. Use up- and down-arrows to navigate through the elements. But you can just as well store the numbers in the correct registers. Say you want to find the inverse of [[1 2][3 4]]: 1 STO 00 2 STO 01 3 STO 02 4 STO 03 0.0202 M^-1 RCL 00 -> -2 RCL 01 -> 1 RCL 02 -> 1.5 RCL 03 -> -0.5 Compared to the HP-15C you have to make sure the matrix descriptors don't specify overlapping regions. But you are not restricted to use just five matrix descriptors. Kind regards Re: Matrix - Paul Dale - 08-18-2013 If the column count is missing, the matrix defaults to square. Thus the descriptor here could be just 0.02. - Pauli
{"url":"https://archived.hpcalc.org/museumforum/printthread.php?tid=248421","timestamp":"2024-11-14T03:27:52Z","content_type":"application/xhtml+xml","content_length":"4708","record_id":"<urn:uuid:f8b5b836-2292-43e2-a960-1bf7bdb0d5df>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00841.warc.gz"}
Role of potential structure in nonadiabatic collisions with applications to He++Ne(2p6)→He++Ne(2p53s) and Na+I→Na++I- The first-order functional-sensitivity densities δσ12(E)/δVij(R) from close-coupling calculations are used for a quantitative probe of the role of structure in crossing diabatic curves used to model nonadiabatic collisions. Application to the excitation of Ne by He+ shows a region of significance for δσ12(E)/δV12(R) as a prominent Gaussian-like profile around the crossing point (R*) in accord with the δ(R-R*) idealization of the Landau-Zener-Stueckelberg (LZS) theory. Similarly, the densities δσ12(E)/δV11(R) and δσ12(E)/δV22(R) mimic dδ(R-R*)/dR-type behavior with one being the negative of the other in the neighborhood of R*, in qualitative agreement with the LZS theory. However, all three sensitivity profiles identify a much broader area of importance for the curves than the loosely defined avoided-crossing region. Also, although the sensitivities themselves decrease with increasing energy, the domain of importance of the curves increases. Examination of the functional-sensitivity densities δσ12(E)/δVij(R) for the chemi-ionization collision Na+I→Na++I- reveals regions of potential-function importance very different from that predicted by the LZS theory. The chemi-ionization cross section is about ten times more sensitive to the ionic curve than the covalent curve. Also, the domain of sensitivity of the ionic curve is larger compared to that of the covalent curve. The density δσ12(E)/δV12(R) for chemi-ionization shows that the area of maximum potential significance is not at the crossing point itself but the regions bracketing it on both sides. Also, the dominant sign dependence of the coupling sensitivity is unexpectedly negative. The results offer other observations about the domain of validity of the intuitive pictures rooted in the LZS theory. The significance of these results to the inversion of inelastic cross-section data is briefly discussed. All Science Journal Classification (ASJC) codes • Atomic and Molecular Physics, and Optics Dive into the research topics of 'Role of potential structure in nonadiabatic collisions with applications to He++Ne(2p6)→He++Ne(2p53s) and Na+I→Na++I-'. Together they form a unique fingerprint.
{"url":"https://collaborate.princeton.edu/en/publications/role-of-potential-structure-in-nonadiabatic-collisions-with-appli","timestamp":"2024-11-04T11:39:19Z","content_type":"text/html","content_length":"54817","record_id":"<urn:uuid:5e43a18e-b41e-415b-b364-7f0035519dca>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00190.warc.gz"}
t-SNE vs. SNE — What's the difference? Addressing the crowding problem. The t-SNE algorithm is an improved version of the SNE algorithm, both used for dimensionality reduction. While I assume you know the ins and outs of the SNE and tSNE algorithm (if you don’t, then you can read this detailed article I published once where we also implemented it from scratch)… …the core idea of SNE (not t-SNE) is the following: • Step 1) For every point (i) in the given high-dimensional data, convert the high-dimensional Euclidean distances to all other points (j) into conditional Gaussian probabilities. □ For instance, consider the marked red point in the dataset on the left below. □ Converting Euclidean distance to all other points into Gaussian probabilities (the distribution on the right above) shows that other red points have a higher probability of being its neighbor than other points. • Step 2) For every data point xᵢ, randomly initialize its counterpart yᵢ in 2-dimensional space. These will be our projections. • Step 3) Just like we defined conditional probabilities in the high-dimensional space in Step 1, we define the conditional probabilities in the low-dimensional space, using Gaussian distribution • Step 4) Now, every data point (i) has a high-dimensional probability distribution and a corresponding low-dimensional distribution: □ The objective is to match these two probability distributions. □ Thus, we can make the positions of counterpart yᵢ’s learnable such that this difference is minimized. □ Using KL divergence as a loss function helps us achieve this. It measures how much information is lost when we use distribution Q to approximate distribution P. □ Ideally, we want to have the minimum loss value (which is zero), and this will be achieved when P=Q. The model can be trained using gradient descent, and it works pretty well. For instance, the following image depicts a 2-dimensional visualization produced by the SNE algorithm on 256-dimensional handwritten digits: SNE produces good clusters. What’s even more astonishing is that properties like orientation, skew, and strokethickness vary smoothly across the space within each cluster. This is depicted below: Nonetheless, it has some limitations, which the t-SNE algorithm addresses. Notice that the clusters produced by SNE are not well separated. Here, it could be fair to assume that the original data clusters, the one in the 256-dimensional space, most likely would have been well separated. Thus: • All zeros must have been together but well separated from other digits. • All ones must have been together but well separated from other digits. • And so on. Yet, SNE still produces tightly packed clusters. This is also called the “crowding problem.” To eliminate this problem, t-SNE was proposed, standing for t-distributed Stochastic Neighbor Embedding (t-SNE). Here’s the difference. Recall that in SNE, we used a Gaussian distribution to define the low-dimensional conditional probabilities. But it is not producing well-separated clusters. One solution is to use some other probability distribution, such that for distant points, we get the same value of the conditional probability as we would have obtained from a Gaussian distribution but at a larger Euclidean distance. Let me simplify that a bit. Compare the following two distributions: Notice that Gaussian achieves a specific value of low probability density at a smaller distance. But t-distribution achieves it at a larger distance. This is precisely what we intend to achieve. We need a heavier tail distribution so that we can still minimize the difference between the two probability distributions but at a larger distance in the low-dimensional space. The Student t-distribution is a perfect fit for it. The following image depicts the difference this change brings: As shown above: • SNE produces closely packed clusters. • t-SNE produces well-separated clusters. And that’s why t-distribution is used in t-SNE. That said, besides producing well-separated clusters, using the Student t-distribution has many more advantages. For instance, it is computationally much faster to evaluate the density of a point under a Student t-distribution than under a Gaussian. We went into much more detail in a beginner-friendly manner in the full article: Formulating and Implementing the t-SNE Algorithm From Scratch. Similar to this, we also formulated the PCA algorithm from scratch here: Formulating the Principal Component Analysis (PCA) Algorithm From Scratch. 👉 Over to you: Can you identify a bottleneck in the t-SNE algorithm? Thanks for reading Daily Dose of Data Science! Subscribe for free to learn something new and insightful about Python and Data Science every day. Also, get a Free Data Science PDF (550+ pages) with 320+ tips. Are you overwhelmed with the amount of information in ML/DS? Every week, I publish no-fluff deep dives on topics that truly matter to your skills for ML/DS roles. For instance: Join below to unlock all full articles: Get your product in front of 77,000 data scientists and other tech professionals. Our newsletter puts your products and services directly in front of an audience that matters — thousands of leaders, senior data scientists, machine learning engineers, data analysts, etc., who have influence over significant tech decisions and big purchases. To ensure your product reaches this influential audience, reserve your space here or reply to this email to ensure your product reaches this influential audience. Well said Expand full comment An amazing read, thanks for clearly explaining the concepts of SNE, I honestly was not aware of the how the algorithm works and this newsletter perfectly explained it. Thanks Avi Chawla Expand full comment
{"url":"https://blog.dailydoseofds.com/p/t-sne-vs-sne-whats-the-difference","timestamp":"2024-11-07T23:46:16Z","content_type":"text/html","content_length":"253423","record_id":"<urn:uuid:1c0e134d-5d41-491a-a688-5e39e0ee9891>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00639.warc.gz"}
Standard Deviation The standard deviation (StandardDeviation) (std) is a statistic that measures the dispersion of a dataset relative to its mean and is calculated as the square root of the variance (Variance). Standard deviation is a statistical measurement in finance that, when applied to the annual rate of return of an investment, sheds light on the historical volatility of that investment. The greater the standard deviation of a security, the greater the variance between each price and the mean, which shows a larger price range. In the case of a set of N values and the arithmetic mean , the sample (unbiased) standard deviation is The population standard deviation is differ only in division factor: 1// Create new instance 2var indicator = new StandardDeviation(28); 3 4// Number of stored values 5indicator.HistoryCapacity = 2; 6 7// Add new data point 910// Get indicator value 11double IndicatorValue = indicator.Std; 12// Get previous value 13if (indicator.HistoryCount == 2) 15double IndicatorPrevValue = indicator[1];
{"url":"https://rtmath.net/assets/docs/finanalysis/html/fac9e789-b33b-4bf0-b46b-815be26d85d8.htm","timestamp":"2024-11-05T22:59:20Z","content_type":"text/html","content_length":"23228","record_id":"<urn:uuid:3d0eb96d-8193-4191-994f-e0c86df10c85>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00653.warc.gz"}
How to Add Prefix or Suffix in Excel Sometimes, you may find the need to add a common text string at beginning or end of all cells in an Excel spreadsheet. Instead of doing this one cell at a time, you can quickly add Prefix or Suffix in Excel by using steps as provided below. Add Prefix or Suffix in Excel As mentioned above, manually adding a common Prefix or Suffix to all cells or a group of cells in Microsoft Excel can be time consuming. If you are dealing with a large spreadsheet, the act of adding Suffix or Prefix to each and every cell of an Excel spreadsheet can take a very long time and leave you frustrated and tired. Hence, we are providing below the steps to quickly add Suffix or Prefix in Excel spreadsheet using “&” Operator and Concatenate Function as available in Microsoft Excel. Add Prefix in Excel Using “&” Operator Perhaps the easiest way to add Prefix or Suffix in Excel is to make use of the “&” Operator. In order to explain this better, let us assume that you have an Excel spreadsheet containing names of doctors in Column A and the task is to add the Prefix Dr. to each and every name in Column A. To add the Prefix (Dr.), place the cursor at Column B, type =”Dr. “&A4 and hit the enter key on the keyboard of your computer. Tip: Instead of typing A4 you can type =”Dr. “& > move the cursor to cell A4 and hit the enter key. After adding the Prefix (Dr.) to the first cell, you can quickly Add Prefix to all the cells by dragging the formula down to all the cells in column B (See image below). Add Suffix in Excel Using “&” Operator In this case, let us assume that you are required to Add the Suffix “PHD.” to all the cells in Column B, so that names read in the format Dr. Name, PHD. To add Suffix, place the cursor in Column C, type =B4&”, PHD.” and hit the enter key on the keyboard of your computer. Tip: Instead of typing B4, you can type = Move the cursor to cell B4, type &”, PHD. and hit the enter key. After adding the suffix (PHD.) to the first cell, you can quickly add this common Suffix to all the other Cells by dragging the formula down to all the Cells in column C (See image below). Add Prefix in Excel Using Concatenate Function Another way to Add Prefix or Suffix to a group of Cells in Excel is to make use of the “Concatenate” function as available in Microsoft Excel. To Add Prefix (Dr.) using Concatenate function, type =Concatenate(“Dr. “,A4) and hit the enter key on the keyboard of your computer. Tip: Instead of typing A4 in above formula, you can move the cursor to A4 cell Once Prefix is added to the first cell, you can quickly add this common Prefix to all the remaining Cells in the Excel spreadsheet by dragging the formula to all the remaining cells. Add Suffix in Excel Using Concatenate Function Again using the above example, let us add the Suffix “PHD.” to the end of all the names in Column B using Concatenate function. To do this, place the cursor in Column C and type =Concatenate(B4,” PHD.”) and hit the enter key on the keyboard of your computer. After adding the Suffix in the first cell, you can quickly add the Suffix to all the remaining cells by dragging the formula to all the remaining cells.
{"url":"https://www.techbout.com/add-prefix-suffix-in-excel-41101/","timestamp":"2024-11-04T05:23:55Z","content_type":"text/html","content_length":"31907","record_id":"<urn:uuid:e113e5a3-2733-4093-8e64-cee2199d5e8c>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00396.warc.gz"}
Mixture models: clustering or density estimation Sent to you by jeffye via Google Reader: My colleague Suresh Venkatasubramanian is running as seminar on this semester. Last week we discussed EM and mixture of Gaussians. I almost skipped because it's a relatively old hat topic for me (how many times have I given this lecture?!), and had some grant stuff going out that day. But I decided to show up anyway. I'm glad I did. We discussed a lot of interesting things, but something that had been bugging me for a while finally materialized in a way about which I can be precise. I basically have two (purely qualitative) issues with mixture of Gaussians as a clustering method. (No, I'm not actually suggesting there's anything wrong with using it in practice.) My first complaint is that many times, MoG is used to get the cluster assignments, or to get soft-cluster assignments... but this has always struck me as a bit weird because then we should be maximizing over the cluster assignments and doing expectations over everything else. Max Welling has done some work related to this in the Bayesian setting. (I vaguely remember that someone else did basically the same thing at basically the same time, but can't remember any more who it was.) But my more fundamental question is this. When we start dealing with MoG, we usually say something like... suppose we have a density F which can be represented at F = pi_0 F_0 + pi_1 F_1 + ... + pi_K F_K, where the pis give a convex combination of "simpler" densities F_k. This question arose in the context of density estimation (if my history is correct) and the maximum likelihood solution via expectation maximization was developed to solve the density estimation problem. That is, the ORIGINAL goal in this case was to do density estimation; the fact that "cluster assignments" were produced as a byproduct was perhaps not the original intent. I can actually give a fairly simple example to try to make this point visually. Here is some data generated by a mixture of uniform distributions. And I'll even tell you that K=2 in this case. There are 20,000 points if I recall correctly: Can you tell me what the distribution is? Can you give me the components? Can you give me cluster assignments? The problem is that I've constructed this to be non-identifiable. Here are two ways of writing down the components. (I've drawn this in 2D, but only pay attention to the x dimension.) They give rise to exactly the same distribution. One is equally weighted components, one uniform on the range (-3,1) and one uniform on the range (-1,3). The other is to have to components, one with 2/3 weight on the range (-3,3) and one with 1/3 weight on the range (-1,1). I could imagine some sort of maximum likelihood parameter estimation giving rise to either of these (EM is hard to get to work here because once a point is outside the bounds of a uniform, it has probability zero). They both correctly recover the distribution, but would give rise to totally different (and sort of weird) cluster assignments. I want to quickly point out that this is a very different issue from the standard "non-identifiability in mixture models issue" that has to do with the fact that any permutation of cluster indices gives rise to the same model. So I guess that all this falls under the category of "if you want X, go for X." If you want a clustering, go for a clustering -- don't go for density estimation and try to read off clusters as a by-product. (Of course, I don't entirely believe this, but I still think it's worth thinking about.) Things you can do from here: 0 Comments:
{"url":"http://blog.so8848.com/2009/03/mixture-models-clustering-or-density_07.html","timestamp":"2024-11-05T07:35:13Z","content_type":"text/html","content_length":"61155","record_id":"<urn:uuid:b39b6a82-e902-4c5e-ae96-d3c8af50f70f>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00773.warc.gz"}
Q1 GMAT 650+ Level Set Theory Question | Union of 3 overlapping sets and complement. The question given below is a GMAT quant problem solving question in Set Theory. Concept: Find union of 3 overlapping sets and then its complement. A medium difficulty, GMAT 650+ level, Set Theory sample question. Question 1: In a class of 120 students numbered 1 to 120, all even numbered students opt for Physics, those whose numbers are divisible by 5 opt for Chemistry and those whose numbers are divisible by 7 opt for Math. How many opt for none of the three subjects? 1. 19 2. 41 3. 21 4. 57 5. 26 Get to 705+ in the GMAT Online GMAT Course @ INR 8000 + GST Video Explanation GMAT Live Online Classes Starts Thu, Nov 07, 2024 Explanatory Answer | GMAT Set Theory Practice Question 1 Objective: Compute number of students who opted for none of the three subjects Approach: Let us find the number of students who took at least one of the three subjects and subtract the result from the overall 120 to get the number of students who did not opt for any of the three subjects. Number of students who took at least one of the three subjects can be found by finding out n(A U B U C), where A is the set of students who took Physics, B is the set of students who took Chemistry and C is the set of students who opted for Math. Now, n(A ∪ B ∪ C) = n(A) + n(B) + n(C) - {n(A ∩ B) + n(B ∩ C) + n(C ∩ A)} + n(A ∩ B ∩ C) n(A) is the number of students who opted for Physics = \\frac{120}{2}) = 60 n(B) is the number of students who opted for Chemistry = \\frac{120}{5}) = 24 n(C) is the number of students who opted for Math = \\frac{120}{7}) = 17 Number of students who opted for Physics and Chemistry Students whose numbers are multiples of 2 and 5 i.e., common multiples of 2 and 5 would have opted for both Physics and Chemistry. The LCM of 2 and 5 will be the first number that is a multiple of 2 and 5. i.e., 10 is the first number that will be a part of both the series. The 10^th, 20^th, 30^th..... numbered students or every 10th student starting from student number 10 would have opted for both Physics and Chemistry. Therefore, n(A ∩ B) = \\frac{120}{10}) = 12 Number of students who opted for Physics and Math Students whose numbers are multiples of 2 and 7 i.e., common multiples of 2 and 7 would have opted for both Physics and Math. The LCM of 2 and 7 will be the first number that is a multiple of 2 and 7. i.e., 14 is the first number that will be a part of both the series. The 14^th, 28^th, 42^nd..... numbered students or every 14th student starting from student number 14 would have opted for Physics and Math. Therefore, n(C ∩ A) = \\frac{120}{14}) = 8 Number of students who opted for Chemistry and Math Students whose numbers are multiples of 5 and 7 i.e., common multiples of 5 and 7 would have opted for both Chemistry and Math. The LCM of 5 and 7 will be the first number that is a multiple of 5 and 7. i.e., 35 is the first number that will be a part of both the series. The 35^th, 70^th.... numbered students or every 35th student starting with student number 35 would have opted for Chemistry and Math. Therefore, n(B ∩ C) = \\frac{120}{35}) = 3 Number of students who opted for all three subjects Students whose numbers are multiples of 2, 5, and 7 i.e., common multiples of 2, 5, and 7 would have opted for all 3 subjects. The LCM of 2, 5, and 7 will be the first number that is a multiple of 2, 5, and 7. i.e., 70 is the first number that will be a part of all 3 series. 70 is the only multiple of 70 in the first 120 natural numbers. So, the 70^th numbered student is the only one who would have opted for all three subjects. Therefore, n(A ∪ B ∪ C) = 60 + 24 + 17 - (12 + 8 + 3) + 1 = 79. n(A ∪ B ∪ C) is the number of students who opted for at least one of the 3 subjects. Number of students who opted for none of the three subjects = 120 - n(A ∪ B ∪ C) = 120 - 79 = 41. Choice B is the correct answer.
{"url":"https://practice-questions.wizako.com/gmat/quant/set-theory/union-intersection-3-sets-1.shtml","timestamp":"2024-11-01T23:36:26Z","content_type":"text/html","content_length":"76254","record_id":"<urn:uuid:fc1586a8-dcf7-4be8-8dc9-20b48ef44018>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00108.warc.gz"}
Preparation and photocatalytic activity of TiO2/PPy/GO for the degradation of Rose Bengal and Victoria Blue dye in visible light in aqueous solution 1. J. Kaur, S. Singhal, Heterogeneous photocatalytic degradation of Rose Bengal: effect of operational parameters, Physica B, 450 (2014) 49–53. 2. M.A. Fox, D.F. Duxbury, The photochemistry and photophysics of triphenylmethane dyes in solid and liquid media, Chem. Rev., 93 (1993) 381–433. 3. S.K. Kansal, M. Singh, D. Sud, Studies on photodegradation of two commercial dyes in aqueous phase using different photocatalysts, J. Hazard. Mater., 141 (2007) 581–590. 4. W. Azmi, R.K. Sani, U.C. Banerjee, Biodegradation of triphenylmethane dyes, Enzyme Microb. Technol., 22 (1998) 185–191. 5. O.K. Dalrymple, D.H. Yeh, M.A. Trotz, Removing pharmaceuticals and endocrine-disrupting compounds from wastewater by photocatalysis, J. Chem. Technol. Biotechnol., 82 (2007) 121–134. 6. F. Deng, L. Min, X. Luo, S. Wu, S. Luo, Visible-light photocatalytic degradation performances and thermal stability due to the synergetic effect of TiO[2] with conductive copolymers of polyaniline and polypyrrole, Nanoscale, 5 (2013) 8703–8710. 7. S. Xu, Y. Zhu, L. Jiang, Y. Dan, Visible light induced photocatalytic degradation of methyl orange by polythiophene/TiO[2] composite particles, Water, Air, Soil Pollut., 213 (2010) 151–159. 8. Y. Park, S. Lee, S.O. Kang, W. Choi, Organic dye-sensitized TiO[2] for the redox conversion of water pollutants under visible light, Chem. Commun., 46 (2010) 2477–2479. 9. M. Saquib, M. Muneer, TiO[2]-mediated photocatalytic degradation of a triphenylmethane dye (gentian violet), in aqueous suspensions, Dyes Pigm., 56 (2003) 37–49. 10. X. Li, G. Liu, J. Zhao, Two competitive primary processes in the photodegradation of cationic triaryl methane dyes under visible irradiation in TiO[2] dispersions, New J. Chem., 23 (1999) 11. C.C. Chen, C.S. Lu, Photocatalytic degradation of Basic Violet 4: degradation efficiency, product distribution, and mechanisms, J. Phys. Chem. C, 111 (2007) 13922–13932. 12. M. Okano, K. Itoh, A. Fujishima, K. Honda, Photoelectrochemical polymerization of pyrrole on TiO[2] and its application to conducting pattern generation, J. Electrochem. Soc., 134 (1987) 837–841. 13. B. Wang, C. Li, J. Pang, X. Qing, J. Zhai, Q. Li, Novel polypyrrolesensitized hollow TiO[2]/fly ash cenospheres: synthesis, characterization, and photocatalytic ability under visible light, Appl. Surf. Sci., 258 (2012) 9989–9996. 14. X. Chen, S.S. Mao, Titanium dioxide nanomaterials: synthesis, properties, modifications, and applications, Chem. Rev., 107 (2007) 2891–2959. 15. A.L. Linsebigler, G. Lu, J.T. Yates, Photocatalysis on TiO[2] surfaces: principles, mechanisms, and selected results, Chem. Rev., 95 (1995) 735–758. 16. R.J. Davis, J.L. Gainer, G.O. Neal, I.W. Wu, Photocatalytic decolorization of wastewater dyes, Water Environ. Res., 66 (1994) 50–53. 17. S. Mozia, A.W. Morawski, M. Toyoda, M. Inagaki, Application of anatase-phase TiO[2] for decomposition of azo dye in a photocatalytic membrane reactor, Desalination, 241 (2009) 97–105. 18. H. Tai, Y. Jiang, G. Xie, J. Yu, M. Zhao, Self-assembly of TiO[2]/polypyrrole nanocomposite ultrathin films and application for an NH[3] gas sensor, Int. J. Environ. Anal. Chem., 87 (2007) 19. C.M. Ng, P.C. Chen, S. Manickam, Hydrothermal crystallization of titania on silver nucleation sites for the synthesis of visible light nano-photocatalysts—enhanced photoactivity using Rhodamine 6G, App. Catal., A, 433–434 (2012) 75–80. 20. P.V. Kamat, K. Vinodgopal, D.E. Wynkoop, Environmental photochemistry on semiconductor surfaces: photosensitized degradation of a textile azo dye, acid orange 7, on TiO[2] particles using visible light, Environ. Sci. Technol., 30 (1996) 1660–1666. 21. H. Huang, M. Gan, L. Ma, L. Yu, H. Hu, F. Yang, Y. Li, C. Ge, Fabrication of polyaniline/graphen/titania nanotube arrays nanocomposites and their application in supercapacitors, J. Alloys Compd., 630 (2015) 214–221. 22. Y. Li, Y. Yu, L. Wu, J. Zhi, Processable polyaniline/titania nanocomposites with good photocatalytic and conductivity properties prepared via peroxo-titanium complex catalyzed emulsion polymerization approach, Appl. Surf. Sci., 273 (2013) 135–143. 23. Y. Yang, J. Wen, J. Wei, R. Xiong, J. Shi, C. Pan, Polypyrroledecorated Ag–TiO[2] nanofibers exhibiting enhanced photocatalytic activity under visible-light illumination, Appl. Mater. Interfaces, 5 (2013) 6201–6207. 24. M. Vautier, C. Guillard, J.M. Herrmann, Photocatalytic degradation of dyes in water: case study of Indigo and of Indigo Carmine, J. Catal., 201 (2001) 46–59. 25. G.K. Mor, K. Shankar, M. Paulose, O.K. Varghese, C.A. Grimes, Use of highly-ordered TiO[2] nanotube arrays in dye-sensitized solar cells, Nano Lett., 6 (2006) 215–218. 26. T.L. Thompson, J.T. Yates, Surface science studies of the photoactivation of TiO[2] new photochemical processes, Chem. Rev., 106 (2006) 4428–4453. 27. A. Kaur, Y.R. Smith, V.R. Subramanian, Improved photocatalytic degradation of textile dye using titanium dioxide nanotubes formed over titanium wires, Environ. Sci. Technol., 43 (2009) 3260–3265. 28. W. Baran, A. Makowski, W. Wardas, The influence of FeCl[3] on the photocatalytic degradation of dissolved azo dyes in aqueous TiO[2] suspensions, Chemosphere, 53 (2003) 87–95. 29. S. Wei, P. Mavinakuli, Q. Wang, D. Chen, R. Asapu, Y. Mao, N. Haldolaarachchige, D.P. Young, Z. Guo, Polypyrroletitania nanocomposites derived from different oxidants, J. Electrochem. Soc., 158 (2011) K205–K212. 30. J.S. Miller, Rose Bengal-sensitized photooxidation of 2-chlorophenol in water using solar simulated light, Water Res., 39 (2005) 412–422. 31. B. Pare, P. Singh, S.B. Jonnalgadda, Degradation and mineralization of Victoria Blue B dye in a slurry photo reactor using advanced oxidation process, J. Sci. Ind. Res., 68 (2009) 724–729. 32. J.L. Gole, J.D. Stout, C. Burda, Y. Lou, X. Chen, Highly efficient formation of visible light tunable TiO[2]-XnX photocatalysts and their transformation at the nanoscale, J. Phys. Chem. B, 108 (2004) 1230–1240. 33. J.D. Kwon, P.H. Kim, J.H. Keum, J.S. Kim, Polypyrrole/titania hybrids: synthetic variation and test for the photovoltaic materials, Sol. Energy Mater. Sol. Cells, 83 (2004) 311–321. 34. D. Wang, Y. Wang, X. Li, Q. Luo, J. An, J. Yue, Sunlight photocatalytic activity of polypyrrole–TiO[2] nanocomposites prepared by ‘in situ’ method, Catal. Commun., 9 (2008) 1162–1166. 35. H.C. Liang, X.Z. Li, Visible-induced photocatalytic reactivity of polymer–sensitized titania nanotube films, Appl. Catal., B, 86 (2009) 8–17. 36. C. Ferreira, S. Domenech, P. Lacaze, Synthesis and characterization of poly-pyrrole/TiO[2] composites on mild steel, J. Appl. Electrochem., 31 (2001) 49–56. 37. L. Sun, Y. Shi, B. Li, X. Li, Y. Wang, Preparation and characterization of polypyrrole/TiO[2] nanocomposites by reverse microemulsion polymerization and its photocatalytic activity for the degradation of methyl orange under natural light, Polym. Compos., 34 (2013) 1076–1080. 38. Z. Guo, K. Shin, A.B. Karki, D.P. Young, R.B. Kaner, H.T. Hahn, Fabrication and characterization of iron oxide nanoparticles filled polypyrrole nanocomposites, J. Nanopart. Res., 11 (2009) 39. D.C. Marcano, D.V. Kosynkin, J.M. Berlin, A. Sinitskii, Z. Sun, A. Slesarev, L.B. Alemany, W. Lu, J.M. Tour, Improved synthesis of graphene oxide, ACS Nano, 4 (2010) 4806–4814. 40. F. Denga, Y. Li, X. Luo, L. Yang, X. Tu, Preparation of conductive polypyrrole/TiO[2] nanocomposite via surface molecular imprinting technique and its photocatalytic activity under simulated solar light irradiation, Colloids Surf., A, 395 (2012) 183–189. 41. M.C. Arenas, L.F. Nunez, D. Rangel, O.M. Alvarez, C.M. Alonso, V.M. Castano, Simple one-step ultrasonic synthesis of anatase titania/polypyrrole nanocomposites, Ultrason. Sonochem., 20 (2013) 42. M. Sedla, M. Mrlik, V. Pavlinek, P. Saha, O. Quadrat, Electrorheological properties of suspensions of hollow globular titanium oxide/polypyrrole particles, Colloid. Polym. Sci., 290 (2012) 41–48. 43. K. Singh, R. Bharose, S.K. Verma, V.K. Singh, Potential of powdered activated mustard cake for decolorising raw sugar, J. Sci. Food Agric., 93 (2013) 157–165. 44. H. Lachheb, E. Puzenat, A. Houas, M. Ksibi, E. Elaloui, C. Guillard, J.M. Herrmann, Photocatalytic degradation of various types of dyes (Alizarin S, Crocein Orange G, Methyl Red, Congo Red, Methylene Blue) in water by UV-irradiated titania, Appl. Catal., B, 39 (2002) 75–90. 45. G.A. Epling, C. Lin, Photoassisted bleaching of dyes utilizing TiO[2] and visible light, Chemosphere, 46 (2002) 561–570. 46. B.D. Cullity, S.R. Stock, Elements of X-Ray Diffraction, 3rd ed., Prentice-Hall, Inc., New Jersey, 2001. 47. M. Hema, A.Y. Arasi, P. Tamilselvi, R. Anbarasan, Titania nanoparticles synthesized by sol–gel technique, Chem. Sci. Trans., 2 (2013) 239–245. 48. M.M. Ba-Abbad, A.A.H. Kadhum, A.B. Mohamad, M.S. Takriff, K. Sopian, Synthesis and catalytic activity of TiO[2] nanoparticles for photochemical oxidation of concentrated chlorophenols under direct solar radiation, Int. J. Electrochem. Sci., 7 (2012) 4871–4888. 49. L. Cavigli, F. Bogani, A. Vinattieri, V. Faso, G. Baldi, Volume versus surface-mediated recombination in anatase TiO[2] nanoparticles, J. Appl. Phys., 106 (2009) 053516. 50. S. Yang, X. Yang, X. Shao, R. Niu, L. Wang, Activated carbon catalyzed persulfate oxidation of Azo dye acid orange 7 at ambient temperature, J. Hazard. Mater., 186 (2011) 659–666. 51. K.M. Reddy, S.V. Manorama, A.R. Reddy, Bandgap studies on anatase titanium dioxide nanoparticles, Mater. Chem. Phys., 78 (2002) 239–245. 52. S. Bashir, J. Liu, H. Zhang, X. Sun, J. Guo, Band gap evaluations of metal-inserted titania nanomaterials, J. Nanopart. Res., 15 (2013) 1572. 53. J. Guo, Interface science in nanoparticles: an electronic structure view of photon-in/photon-out soft-X-ray spectroscopy, Int. J. Quantum Chem., 109 (2009) 2714–2721. 54. A. Achilleos, E. Hapeshi, N.P. Xekoukoulotakis, D. Mantzavinos, D.F. Kassinos, Factors affecting diclofenac decomposition in water by UV-A/TiO[2] photo-catalysis, Chem. Eng. J., 161 (2010) 53–59. 55. K.M. Reza, A.S.W. Kurny, F. Gulshan, Parameters affecting the photocatalytic degradation of dyes using TiO[2]: a review, Appl. Water Sci., 7 (2017) 1569–1578. 56. E. Vulliet, J.M. Chovelon, C. Guillard, J.M. Herrmann, Factors influencing the photo-catalytic degradation of sulfonylurea herbicides by TiO[2] aqueous suspension, J. Photochem. Photobiol., A, 159 (2003) 71–79. 57. K. Bubacz, J. Choina, D. Dolat, A.W. Morawski, Methylene blue and phenol photo-catalytic degradation on nanoparticles of anatase TiO[2], Pol. J. Environ. Stud., 19 (2010) 685–691. 58. C.M. Ling, A.R. Mohamed, S. Bhatia, Performance of photocatalytic reactors using immobilized TiO[2] film for the degradation of phenol and methylene blue dye present in water stream, Chemosphere, 57 (2004) 547–554. 59. C. Guillard, H. Lachheb, A. Houas, M. Ksibi, E. Elaloui, J.M. Herrmann, Influence of chemical structure of dyes, of pH and of inorganic salts on their photocatalytic degradation by TiO[2] comparison of the efficiency of powder and supported TiO[2], J. Photochem. Photobiol., A, 158 (2003) 27–36. 60. B. Zielinska, J. Grzechulska, R.J. Kalenczuk, A.W. Morawski, The pH influence on photocatalytic decomposition of organic dyes over A11 and P25 titanium dioxide, Appl. Catal., B, 45 (2003) 61. S. Senthilkumaar, K. Porkodi, R. Gomathi, A.G. Maheswari, N. Manonmani, Sol–gel derived silver doped nanocrystalline titania catalysed photodegradation of methylene blue from aqueous solution, Dyes Pigm., 69 (2006) 22–30. 62. S.K. Kansal, N. Kaur, S. Singh, Photocatalytic degradation of two commercial reactive dyes in aqueous phase using nanophotocatalysts, Nanoscale Res. Lett., 4 (2009) 709–716. 63. K. Tanaka, K. Padermpole, T. Hisanaga, Photocatalytic degradation of commercial azo dyes, Water Res., 34 (2000) 327–333. 64. J. Grzechulska, A.W. Morawski, Photocatalytic decomposition of azo-dye acid black 1 in water over modified titanium dioxide, Appl. Catal., B, 36 (2002) 45–51. 65. E. Vulliet, J.M. Chovelon, C. Guillard, J.M. Herrmann, Factors influencing the photocatalytic degradation of sulfonylurea herbicides by TiO[2] aqueous suspension, J. Photochem. Photobiol., A, 159 (2003) 71–79. 66. L. Zhang, W. Zhang, R. Li, H. Zhong, Y. Zhao, Y. Zhang, X. Wang, Photo degradation of methyl orange by attapulgite–SnO[2]–TiO[2] nanocomposites, J. Hazard. Mater., 171 (2009) 294–300. 67. D. Chen, A.K. Ray, Photocatalytic kinetics of phenol and its derivatives over UV irradiated TiO[2], Appl. Catal., B, 23 (1999) 143–157. 68. S. Ameen, H.K. Seo, M.S. Akhtar, H.S. Shin, Novel graphene/polyaniline nano-composites and its photocatalytic activity toward the degradation of Rose Bengal dye, Chem. Eng. J., 210 (2012) 69. T. Sinha, M. Ahmaruzzaman, Photocatalytic decomposition behavior and reaction pathways of organic compounds using Cu nanoparticles synthesized via a green route, Photochem. Photobiol. Sci., 15 (2016) 1272–1281. 70. L. Zang, C.Y. Liu, X.M. Ren, Photochemistry of semiconductor particles. Part 4. Effects of surface condition on the photodegradation of 2,4-dichlorophenol catalysed by TiO[2] suspensions, J. Chem. Soc., Faraday Trans., 91 (1995) 917–923. 71. F.D. Mai, C.S. Lu, C.W. Wu, C.H. Huang, J.Y. Chen, C.C. Chen, Mechanisms of photocatalytic degradation of Victoria Blue R using nano-TiO[2], Sep. Purif. Technol., 62 (2008) 423–436. 72. B.D. Credico, I.R. Bellobono, M. D’Arienzo, D. Fumagalli, M. Redaelli, R. Scotti, F. Morazzon, Efficacy of the reactive oxygen species generated by immobilized TiO[2] in the photocatalytic degradation of diclofenac, Int. J. Photoenergy, 2015 (2015) 1–13. 73. J. Eriksson, J. Svanfelt, L. Kronberg, A photochemical study of diclofenac and its major transformation products, Photochem. Photobiol., 86 (2010) 528–532. 74. J. Zhang, Y. Nosaka, Mechanism of the OH radical generation in photocatalysis with TiO[2] of different crystalline types, J. Phys. Chem. C, 118 (2014) 10824–10832. 75. R.W. Matthews, Kinetics of photocatalytic oxidation of organic solutes over titanium dioxide, J. Catal., 111 (1988) 264–272. 76. R. Zepp, D. Crosby, A, Lewis Publs., CRC Press, Boca Raton, Florida, Chapter 22 (1994) 317–348. 77. S. Yang, X. Yang, X. Shao, R. Niu, L. Wang, Activated carbon catalyzed persulfate oxidation of azo dye acid orange 7 at ambient temperature, J. Hazard. Mater., 186 (2011) 659–666. 78. N. Guettaı, H.A. Amar, Photocatalytic oxidation of methyl orange in presence of titanium dioxide in aqueous suspension. Part II: Kinetics study, Desalination, 185 (2005) 439–448. 79. E. Kordouli, K. Bourikas, A. Lycourghiotis, C. Kordulis, The mechanism of azo-dyes adsorption on the titanium dioxide surface and their photocatalytic degradation over samples with various anatase/rutile ratios, Catal. Today, 252 (2015) 128–135. 80. I.K. Konstantinou, T.A. Albanis TiO[2]-assisted photocatalytic degradation of azo dyes in aqueous solution: kinetics and mechanistic investigations. A review, Appl. Catal., B, 49 (2004) 1–14. 81. A.F. Júnior, E.C. de Oliveira Lima, A.N. Miguel, P.R. Wells, Synthesis of nanoparticles of Co[x]Fe[(3−x)]O[4] by combustion reaction method, J. Magn. Magn. Mater., 308 (2007) 198–202. 82. M.A. Abu-Hassan, J.K. Kim, I.S. Metcalfe, D. Mantzavinos, Kinetics of low frequency sonodegradation of linear alkylbenzene sulfonate solutions, Chemosphere, 62 (2006) 749–755. 83. N.M. Mahmoodi, M. Arami, N.Y. Limaee, N.S. Tabrizi, Kinetics of heterogeneous photocatalytic degradation of reactive dyes in an immobilized TiO[2] photocatalytic reactor, J. Colloid Interface Sci., 295 (2006) 159–164. 84. G.M. Liu, X.Z. Li, J.C. Zhao, S. Horikoshi, H. Hidaka, Photooxidation mechanism of dye alizarin red in TiO[2] dispersions under visible illumination: an experimental and theoretical examination, J. Mol. Catal. A: Chem., 153 (2000) 221–229. 85. C. Galindo, P. Jacques, A. Kalt, Photodegradation of the aminoazobenzene acid orange 52 by three advanced oxidation processes: UV/H[2]O[2], UV/TiO[2] and VIS/TiO[2]: comparative mechanistic and kinetic investigations, J. Photochem. Photobiol. A, 130 (2000) 35–47.
{"url":"https://www.deswater.com/DWT_references/vol_114_references/114_2018_265.html","timestamp":"2024-11-03T00:42:38Z","content_type":"text/html","content_length":"19448","record_id":"<urn:uuid:9571a1c6-37e7-4a94-aae7-cb3443f8806a>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00656.warc.gz"}
12.2.4 Reflection at a clavichord tangent In order to obtain a satisfactory model for the modal damping in a clavichord string, we need to allow for a new energy dissipation mechanism in addition to the ones introduced in section 5.4. When the player’s finger holds the tangent in contact with the string, this does not make a rigid boundary condition on the string: small movement of the tangent can occur, which allows some energy to leak past the tangent, to be dissipated by the felt woven into the non-playing lengths of string. We will use a very simple model to investigate this leakage: an ideal flexible string, with the tangent/key/finger combination modelled as a mass $M$ and a damper (or dashpot) with strength $d$, as sketched in Fig. 1. Figure 1. Reflection and transmission of a wave on the string past a simple mass/dashpot model of the tangent, key and finger. We imagine an incident wave with frequency $\omega$ and unit amplitude arriving at the tangent from the left-hand side. Interaction with the tangent generates outgoing reflected and transmitted waves as sketched, with (complex) amplitudes $R$ and $S$ respectively. The transmitted wave will be dissipated by the felt, so that no waves arrive back at the tangent from the right-hand side. So for $x \ le 0$ the transverse displacement of the string is $$w(x,t)=e^{i \omega (t-x/c)} + R e^{i \omega (t+x/c)} \tag{1}$$ while for $x \ge 0$ $$w(x,t)=S e^{i \omega (t-x/c)} \tag{2}$$ where $c=\sqrt{T/m}$ is the wave speed on the string, $T$ being the tension and $m$ the mass per unit length. At $x=0$ we have two conditions, which will give us the two equations we need to solve for $R$ and $S$. First, we have the continuity condition: the string is not broken at $x=0$ so we must get the same answer for $w(0,t)$ from both expressions. This requires $$1+R=S . \tag{3}$$ Second, we have the equation of motion for the mass $M$: $$M \dfrac{\partial^2 w}{\partial t^2} = -d \dfrac{\partial w}{\partial t} + T\left[ \left. \dfrac{\partial w}{\partial x} \right|_{x=0+} – \left. \dfrac{\partial w}{\partial x} \right|_{x=0-} \ right] \tag{4}$$ so from equations (1) and (2) $$(-M \omega^2 + i \omega d)(1+R) = \dfrac{i \omega T}{c} (-S -R+1) . \tag{5}$$ Combining with equation (3) and rearranging gives $$R=-\dfrac{i \omega M + d}{2 Z_0 + i \omega M + d} \tag{6}$$ where $Z_0=T/c = \sqrt{T m}$ is the wave impedance of the string. Now from equation (6) of section 5.1.2 we know that the associated loss factor $\eta_t^{(n)}$ for the $n$th mode of the string is given by $$\eta_t^{(n)} \approx \dfrac{1 -|R|^2}{2 \pi n} , \tag{7}$$ so substituting equation (6) and simplifying we obtain $$\eta_t^{(n)} \approx \dfrac{2 Z_0}{\pi n} \left[\dfrac{Z_0 + d}{(2Z_0+d)^2 + \omega^2 M^2} \right] . \tag{8}$$ To obtain a complete model for the damping of the string, we need to combine this with the other loss factors we already know: energy loss at the other end of the string due to coupling with the clavichord soundboard is described by $\eta^{(n)}_{body}$ from section 5.1.2; and energy loss due to viscosity in the surrounding air is described by $\eta^{(n)}_{air}$ from section 5.4.5. Finally, we have energy loss internal to the string itself. For metal strings as on the clavichord, the internal damping of the material is very low, and it will turn out to be good enough to include a crude estimate by simply adding a small, constant background loss factor $\eta_{background} = 10^{-4}$. So the combined modal loss factor $\eta^{(n)}$ is given by $$\eta^{(n)} = \eta_t^{(n)} + \eta^{(n)}_{body} + \eta^{(n)}_{air} + \eta_{background} . \tag{9}$$ In order to obtain an estimate of $\eta^{(n)}_{body}$, we need the bridge admittance. Figure 2 shows the measured admittance of the clavichord soundboard, close to the position of the note chosen for analysis. It also shows, in a blue dashed curve, the corresponding admittance for a harpsichord that we will come to in a moment. Both soundboards, being plate-like, show a broadly horizontal trend. For our present purpose, it is enough to estimate the mean level of each: the clavichord is around –35 dB, the harpsichord around –45 dB. Figure 2. The measured bridge admittance of the tested clavichord (red line) and harpsichord (blue dashed line). The occasional narrow dips in both curves are the result of string resonance which are insufficiently damped, but these do not matter for the purposes of this section. Figure 3 shows a plot of the various loss factor contributions for a particular clavichord string, which had length $L$ = 0.99 m and diameter 0.48 mm. The new factor, $\eta_t^{(n)}$, is plotted in a red dash-dot line. This has been calculated using the values $M=10\mathrm{~g}$, $d=10\mathrm{~Ns/m}$, chosen to give a good fit to measurements to be shown in Fig. 4. The other contributions are shown in various types of broken blue line. The sum of all the blue contributions gives the solid blue curve; adding in $\eta_t^{(n)}$ gives the solid red curve. It can be seen that $\eta_t^{(n)}$ dominates at the lowest frequencies, causing a big divergence between red and blue curves. Note that $\eta^{(n)}_{body}$, shown in the dashed blue line, makes only a rather small contribution, which is why we can get away with a crude estimate of the bridge admittance. Figure 3. The loss factor contributions for the particular clavichord string used in the simulations. Red dash-dot line: $\eta_t^{(n)} $; blue dash-dot line: $\eta^{(n)}_{air}$; blue dashed line: $\ eta^{(n)}_{body} $; blue dotted line: $\eta_{background}$. The solid blue line is the sum of all the contributions plotted in blue; the solid red line is the result of adding in $\eta_t^{(n)} $. The red curve in Fig. 4 shows the Q-factors calculated by inverting the loss factor shown in the red curve in Fig. 3. This can be compared to the set of measured Q-factors, the red stars. The blue circles show corresponding measurements from the same note played on a harpsichord string with length 1.48 m and diameter 0.36 mm. The prediction for this string is shown in the blue curve. This is similar to the inverse of the blue curve in Fig. 3, but not quite the same because it has been calculated with the different length and diameter of the harpsichord string, and also with the different bridge admittance estimate. For both strings, the model captures the trend of the data very well. Figure 4. Measured and predicted Q-factors for the same note on a clavichord (red) and a harpsichord (blue). The discrete symbols are measured values. The red curve is the inverse of the red curve in Fig. 3. The blue curve is the inverse of the blue curve in Fig. 3, but it has been computed using the slightly different length and diameter of the actual harpsichord string.
{"url":"https://euphonics.org/12-2-4-reflection-at-a-clavichord-tangent/","timestamp":"2024-11-10T01:23:34Z","content_type":"text/html","content_length":"82606","record_id":"<urn:uuid:9a186930-d1e3-4f0b-a55d-76855132941a>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00789.warc.gz"}
48. 2 dice are rolled, one black and one white. How many rolls occur in which... 48. 2 dice are rolled, one black and one white. How many rolls occur in which... 48. 2 dice are rolled, one black and one white. How many rolls occur in which the value on the white die is greater than the value on the black die? 2 dice are rolled: one black and one white Now the total possibilities are 36 as shown below. In the values (i,j), let i be the outcome from white die and j be the outcome from black die. We need to find the cases in which i>j • There are 6 cases in which the outcomes are same (i=j). • Remaining 30 cases have equal distribution. In half of them i>j and in other half j>i. So the answer would be 15 We can also see the cases in which i>j So, these are the 15 cases in which i>j So, the answer is 15
{"url":"https://justaaa.com/statistics-and-probability/1307113-48-2-dice-are-rolled-one-black-and-one-white-how","timestamp":"2024-11-04T11:54:08Z","content_type":"text/html","content_length":"40306","record_id":"<urn:uuid:429454e7-87a0-4be0-9ab3-10c0f42f5168>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00074.warc.gz"}
21CS42 DAA Solved Previous year Question Paper PYQ's - VTU Updates 21CS42 DAA Solved Previous year Question Paper PYQ’s 21CS42 – Design and Analysis of Algorithm – DAA Solved Previous year question papers – PYQ’s MODULE – 1 1) What is an algorithm? Explain the criteria/properties to be satisfied by the algorithm 3) Explain the algorithm design and analysis process 4) Prove the following theorem if t1(n) ϵO((g1(n)) and t2(n) ϵ O(g2(n)) then t1(n)+ t2(n) ϵ O(max{g1(n),g2(n}). 5) The factorial function n! has value 1when n<=1 and value n *(n-1)! When n is >1. Write both a recursive and iterative algorithm to compute n! 6) List the following functions according to their order of growth from lowest to highest. State proper reasons, (n-2)! , 5 log (n+100)^10, 2^2n , 0.001n^4 +3n^3+1, ln^2 n, ^3√n, 3^n (n-2)!= Theta(n!) 5 log (n+100)^10 = theta(log n()) 2^2n = theta(4^n ) 0.001n^4 +3n^3+1 =theta(n^4 ) ln^2 n =theta(log^2 n ) ^3√n =Theta(n ^1/3 ) 3^n =theta(3^n) 7) Write an algorithm to find the uniqueness of elements in an array and give the mathematical analysis of this non-recursive algorithm with all steps. 8) Define algorithm with a specification of the writing algorithm 9) Write Tower of Hanoi algorithm and steps for analysis of recursive algorithm. Show the analysis of above algorithm. 10) Explain with example how the count variable is introduced in the program to find the number of steps required by a program to solve the problems. 11) Write an algorithm to find max element in the array of n elements. Give mathematical analysis of this algorithm. 12) Explain the general plan for analyzing the efficiency of recursive algorithm. Write the algorithm to find factorial of a given number. Derive its efficiency. 13) What are the various basic efficiency classes? Explain Big O, Big Omega and Big Theta asymptotic notations. 14) Write an algorithm for Selection sort, trace its working for the given elements and also analyze its efficiency. 15) Explain Brute Force String matching problem with an example. Write an algorithm for same and analyze its efficiency. 16) Design an algorithm for performing sequential / linear search and compute best case, worst case, and average case efficiency. Module 2 1) Explain divide and conquer technique with general algorithm with control abstract. 2) Write the control abstract for divide and conquer technique. 3) Solve the following recurrence relations: (i) T(n)=2T(n/2)+1 (ii) T(n)=T(n-1)+cn 4) State Master Theorem. Solve the recurrence relation using Master Theorem (i)T(n)=2T(n/2) + c n (ii) T(n)=2T(n/2)+1 5) Design an algorithm for performing merge sort. Analyze its time efficiency; apply the same to sort the following number. 4 9 0 -1 6 8 9 2 3 12 6) Design an algorithm for performing quick sort . Apply the same to solve the following set of numbers : 5 3 1 9 8 2 4 7 7) Sort the following keyword A,L,G,O,R,I,T,H,M by applying Quicksort method. 8) Apply quick sort algorithm to sort the list E,X,A,M,P,L,E in alphabetical order. Draw tree of recursive calls made 9) List the advantages and disadvantages of divide and conquer. 10) Explain the concept of divide and conquer . Write recursive algorithm to perform binary search on list of elements. 11) Compare straight forward method and divide and conquer method of finding max and min element of the list. 12) Show the number of elements comparison with example and proof of Binary search for Average, best and worst time case analysis. 13) Develop a recursive algorithm to find max and min element from the list. Illustrate with an Example. 14) Explain the decrease and conquer technique with its variations. 15) Design an algorithm for performing insertion sort. Analyze its time efficiency; apply the same to sort the following number. 89 45 68 90 29 34 17 16) Design an algorithm for DFS and BFS traversal methods. Comment on its efficiency. Apply the same for the given graph. 17) Differentiate between DFS and BFS. 18) Apply topological sort on the following graph using source removal and DFS based methods MODULE 3 (P1, P2, ………,P7) = (10,5,15,7,6,18,3) 2. Apply prim’s algorithm and kruskal’s algorithm method to find the minimum cost spanning tree to graph shown below. 3. Write an algorithm to solve single source shortest path problem. Apply the algorithm to the graph shown below. 4. Define heap. Write bottom-up construction algorithm. Construct heap for the list 1,8,6,5,3,7 using bottom-up algorithm and successive key insertion method. 5. Find the optimal solution to the knapsack instant n=7, m=15 using greedy method. 6. Find the minimum spanning tree using Kruskal’s algorithm. 7. Construct a Huffman code for the following data. Encode the text ABACABAD and decode 100010111001010. 8. Calculate the shortest distance and shortest path from vertex 5 to vertex 0 using Dijkstra’s algorithm. 9. State job sequencing with deadline problem. Find the solution generated by job sequencing problem for 7 jobs given profits 3,5,20,18,1,6,30 and deadlines 1,3,4,3,2,1,2 respectively. 10. Obtain a minimum cost-spanning tree for the graph below using Prim’s algorithm. 11. Sort the given list of number using Heap sort. 12. Explain the greedy criterion. Apply greedy method for the following instance of knapsack problem. Capacity of knapsack (M) = 5. 13. Construct a Huffman code for the following data and encode the test BADEC 14. Solve the below instance of single source shortest path problem with vertex “a” as the source 15. Apply Prim’s and Kruskal’s algorithm to the following instance to find minimum spanning tree. 16. Sort the below given lists by heapsort using the array representation of heaps. • 3,2,4,1,6,5 • 3,4,2,1,5,6 • H,E,A,P,S,O,R,T Module 4 1 Explain the general procedure to solve a multistage graph problem using backward approach with an example. 2. Explain multistage graph problem with example. Write an algorithm for multistage graph problem using forward approach. 3. Solve multistage graph problem using forward approach for the given graph consider vertex 1 as the source and vertex 12 as sink. 4. Write an algorithm for optimal binary search tree. Construct an optimal binary search tree for the following: Items A B C D Probabilities 0.1 0.2 0.4 0.3 5. Design Floyd’s algorithm to find shortest distances from all nodes to all other nodes. 6. Apply Floyd’s algorithm to find all pair shortest path for the given graph. 7. Apply Warshall’s algorithm to compute transitive closure for the graph below. 8. Define transitive closure of a graph. Write Warshall’s algorithm to compute transitive closure of a directed graph. Apply the same on the graph defined by the following adjacency matrix: 9. Using Dynamic Programming solve the below instance of knapsack problem. Capacity W=5 Item Weight Value 10. Solve the following travelling sales person problem represented as a graph shown in Fig below using dynamic programming. 11. Find the optimal tour for sales person using dynamic programming technique for the given graph and its corresponding edge length matrix. 12. Write an algorithm for Bellman Ford. Find the shortest path from node 1 to every other node in the graph using Bellman Ford algorithm. 13. Explain the concept of negative weight cycle in a directed graph. 14. Explain sorting by counting. 15. Write an algorithm and explain input enhancement in string matching using Harspool’s algorithm. Module 5 2. Discuss graph coloring problem. Find different solutions for 4 nodes and all possible 3 coloring problem. 3. Write a note on (i) Non deterministic algorithms (ii) LC branch and bound solution to solve 0/1 knapsack problem. 4. What are the two additional items required by Branch and Bound technique compared with backtracking. Solve the following assignment problem using branch and bound technique whose cost matrix for assigning four jobs to four persons are given. 5. Explain Subset sum problem with suitable example. 6. Explain the classes of NP hard and NP complete. 7. What is Hamiltonian circuit problem? What is the procedure to find Hamiltonian circuit of a graph ? 8. Apply the branch and bound algorithm to solve the travelling salesman problem for the graph below. 9. Apply backtracking technique to solve the below instance of the subset sum problem. S={5, 10, 12, 13, 15, 18} d=30 10. How the branch and bound technique is different from backtracking? Solve the following instance of knapsack problem using Branch and Bound technique. Give knapsack capacity = 10. 11. Define Hamiltonian cycle. Check whether the Hamiltonian cycle exists for the graph given below. 12. Apply backtracking to the problem of finding a Hamiltonian circuit in the graph shown below: 13. Define the following (08 Marks) • Class P • Class NP • NP Complete problem • NP Hard problem
{"url":"https://vtuupdates.com/pyqs/21cs42-daa-solved-previous-year-question-paper-pyqs/","timestamp":"2024-11-14T07:56:34Z","content_type":"text/html","content_length":"209171","record_id":"<urn:uuid:f17573fc-68ea-4315-bbc4-a52205b4aa4f>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00280.warc.gz"}
Find the interest and amount resided in the bank - FcukTheCode Find the interest and amount resided in the bank Ratik a young millionaire deposits $10000 into a bank account paying 7% simple interest per year. He left the money in for 5 years. He likes to predict the interest and the amount earned by him at the end of 5 years Can you help him to find the interest and amount resided in his bank acoount after 5 years? Functional Description: interest = (p * i * t) / 100 and amount = p + interest. where p is total principal, i is rate of interest per year, and t is total time in years. Input has three values representing Principle,Interest per year and Time. First Line: Print the interest earned for the principle amount. Second Line: Print the Total amount earned including interest. #include <stdio.h> int main() float p,i,interest,amount; int t; scanf("%f %f %d",&p,&i,&t); printf("Interest after %d Years = $%.2f",t,interest); printf("\nTotal Amount after %d Years = $%.2f",t,amount); return 0; 10000 7.8 5 Interest after 5 Years = $3900.00 Total Amount after 5 Years = $13900.00 Interest after 7 Years = $8190.00 Total Amount after 7 Years = $21190.00 12500 9.3 7 Interest after 7 Years = $8137.50 Total Amount after 7 Years = $20637.50 Morae Q!
{"url":"https://www.fcukthecode.com/ftc-cprogm-simple-interest-in-5-years-fcukthecode/","timestamp":"2024-11-09T09:48:07Z","content_type":"text/html","content_length":"151718","record_id":"<urn:uuid:2f4d7b04-bea2-4829-a57f-4e667e2ccc55>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00582.warc.gz"}
Brain Teaser: 3+3=4 Remove 3 Sticks To Fix The Equation | Matchstick Puzzles - EduViet Corporation Brain Teaser: 3+3=4 Remove 3 Sticks To Fix The Equation | Matchstick Puzzles In recent years, brain teasers stick puzzles have attracted the attention of many people who are actively looking for interesting puzzles to participate in and solve. These puzzles have become fascinating and popular topics. Essentially, matchstick puzzles fall into the category of rearrangement puzzles. As you delve into these puzzles, your tasks revolve around adjusting the position of the matchsticks to obtain a specific result, usually forming a valid equation. Brainteaser: 3+3=4 Remove 3 sticks to fix the equation | Matchstick Puzzle Matchstick puzzles place readers in situations that require the application of problem-solving skills. You are watching: Brain Teaser: 3+3=4 Remove 3 Sticks To Fix The Equation | Matchstick Puzzles In the picture above, the arrangement of matchsticks forms a mathematical equation: Clearly, this equation is flawed when the numbers are added. Your challenge is to correct the equation using matchsticks. There is a limited time limit, and you are bound by one rule: you can change the matchsticks. The job requires cognitive abilities and rapid analytical skills in such a short period of time. Solving this puzzle relies on careful attention to the image and keen observation. As a moderately complex challenge, those with sharp intelligence and impeccable attention to detail are prepared to solve it faster. The clock is ticking; the countdown begins. Examine the image carefully and identify the exact matchsticks that need to be repositioned to correct the equation. See more : (Updated) CSK vs RR Head to Head in IPL: Check Stats, Records and Results Logic matchstick puzzles are basically rearrangement puzzles where you need to rearrange the matchsticks arranged in shapes or equations to fix and solve the puzzle. For all those who wish to improve their IQ and logical problem solving skills, we have provided one such matchstick puzzle in this article. The final moment has arrived. time up. Can you solve the puzzle’s secret? Hopefully most people are successful and a few may still be thinking about it. Are you eager to reveal the solution? See more : Test Your Lateral Thinking Skills Find the Number 965 Within 10 Seconds The moment of revelation has arrived. The solution is revealed below! Brainteaser: 3+3=4 Remove 3 sticks to fix the equation | Matchstick Puzzle Solution This matchstick brain teaser puzzle presents quite a challenge and now we invite you to try to solve it. To correct the equation, you simply remove three matches. Take two matches from the number 3 and one from the plus sign. With this change, the equation becomes: And this configuration is indeed correct! Disclaimer: The above information is for general information purposes only. All information on this website is provided in good faith, but we make no representations or warranties, express or implied, as to the accuracy, adequacy, validity, reliability, availability or completeness of any information on this website. Source: https://truongnguyenbinhkhiem.edu.vn Category: Brain Teaser Leave a Comment
{"url":"https://truongnguyenbinhkhiem.edu.vn/brain-teaser-334-remove-3-sticks-to-fix-the-equation-matchstick-puzzles","timestamp":"2024-11-14T08:40:07Z","content_type":"text/html","content_length":"117803","record_id":"<urn:uuid:28283643-ce14-4d1e-9a13-f8b6ba5eb9c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00859.warc.gz"}
RoSELS: Road Surface Extraction for 3D Automotive LiDAR Point Cloud Sequence Dhvani Katkoria, Jaya Sreevalsan-Nair Road surface geometry provides information about navigable space in autonomous driving. Ground plane estimation is done on “road” points after semantic segmentation of three-dimensional (3D) automotive LiDAR point clouds as a precursor to this geometry extraction. However, the actual geometry extraction is less explored, as it is expensive to use all “road” points for mesh generation. Thus, we propose a coarser surface approximation using road edge points. The geometry extraction for the entire sequence of a trajectory provides the complete road geometry, from the point of view of the ego-vehicle. Thus, we propose an automated system, RoSELS (Road Surface Extraction for LiDAR point cloud Sequence). Our novel approach involves ground point detection and road geometry classification, i.e. frame classification, for determining the road edge points. We use appropriate supervised and pre-trained transfer learning models, along with computational geometry algorithms to implement the workflow. Our results on SemanticKITTI show that our extracted road surface for the sequence is qualitatively and quantitatively close to the reference trajectory. Paper Citation in Harvard Style Katkoria D. and Sreevalsan-Nair J. (2022). RoSELS: Road Surface Extraction for 3D Automotive LiDAR Point Cloud Sequence. In Proceedings of the 3rd International Conference on Deep Learning Theory and Applications - Volume 1: DeLTA, ISBN 978-989-758-584-5, pages 55-67. DOI: 10.5220/0011301700003277 in Bibtex Style author={Dhvani Katkoria and Jaya Sreevalsan-Nair}, title={RoSELS: Road Surface Extraction for 3D Automotive LiDAR Point Cloud Sequence}, booktitle={Proceedings of the 3rd International Conference on Deep Learning Theory and Applications - Volume 1: DeLTA,}, in EndNote Style TY - CONF JO - Proceedings of the 3rd International Conference on Deep Learning Theory and Applications - Volume 1: DeLTA, TI - RoSELS: Road Surface Extraction for 3D Automotive LiDAR Point Cloud Sequence SN - 978-989-758-584-5 AU - Katkoria D. AU - Sreevalsan-Nair J. PY - 2022 SP - 55 EP - 67 DO - 10.5220/0011301700003277
{"url":"http://scitepress.net/PublishedPapers/2022/113017/","timestamp":"2024-11-05T19:06:57Z","content_type":"text/html","content_length":"7443","record_id":"<urn:uuid:db052da1-2823-49aa-9a95-731f60cb23bd>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00115.warc.gz"}
Rapid sample size calculations for a defined likelihood ratio test-based power in mixed effects models Camille Vong, Martin Bergstrand, Mats O. Karlsson Department of Pharmaceutical Biosciences, Uppsala University, Uppsala, Sweden Objectives: Efficient power calculation methods have previously been suggested for Wald test based inference in mixed effects models (1) but for Likelihood ratio test (LRT) based hypothesis testing, the only available alternative has been to perform computer-intensive multiple simulations and re-estimations (2). For correct power calculations, a type 1 error assessment to calibrate the significance criterion is often needed for small sample sizes, due to a difference between the actual and the nominal (chi squared) significance criteria(3). The proposed method is based on the use of individual Objective Function Values (iOFV) and aims to provide a fast and accurate prediction of the power and sample size relationship without any need for adjustment of the significance Methods: The principle of the iOFV sampling method is as follows: (i) a large dataset (e.g. 1000 individuals) is simulated with a full model and subsequently the full and reduced models are re-estimated with this data set, (ii) iOFVs are extracted and for each subject the difference in iOFV between the full and reduced models is computed (ΔiOFV), (iii) ΔiOFVs are sampled according to the design for which power is to be calculated and a starting sample size (N), (iv) the ΔiOFVs sum for each sample is calculated (∑ΔiOFVs), (v) steps iii and iv are repeated many times, (vi) the percentage of ∑ΔiOFVs greater than the significance criterion (e.g. 3.84 for one degree of freedom and α=0.05) is taken as the power for sample size N, (vii) steps iii-vi are repeated with increasing N to provide the power at all sample sizes of interest. The power versus sample size relationship established via the iOFV method was compared to traditional assessment of model-based power (200 simulated datasets) for a selection of sample sizes. Two examples were investigated, a one-compartment IV-Bolus PK model with sex as a covariate on CL (3) and a more complex FPG-HbA1c model with a drug effect on kout for FPG (4). Results: Power generated for both models displayed concordance between the suggested iOFV method and the nominal power. For 90% power, the difference in required sample size was in all investigated cases less than 10%. To maintain a 5% type 1 error a significance criteria calibration at each sample size was needed for the PK model example and the traditional method but not for power assessment with the iOFV sampling method. In both cases, the iOFV method was able to estimate the entire power vs. sample size relationship in less than 1% of the time required to estimate the power at a single sample size with the traditional method. Conclusions: The suggested method provides a fast and still accurate prediction of the power and sample size relationship for likelihood ratio test based hypothesis testing in mixed effects models. The iOFV sampling method is general and mimics more closely than Wald-test based methods the hypothesis tests that are typically used to establish significance. [1] Ogungbenro K, Aarons L, Graham G. Sample size calculations based on generalized estimating equations for population pharmacokinetic experiments. J Biopharm Stat2006;16(2):135-50. [2] Ette EI, Roy A. Designing population pharmacokinetic studies for efficient parameter estimation. . In: Ette EI, Williams PJ, editors. Pharmacometrics: the Science of Quantitative Pharmacology. Hoboken: John Wiley & Sons; 2007. p. 303-44. [3] Wahlby U, Jonsson EN, Karlsson MO. Assessment of actual significance levels for covariate effects in NONMEM. J Pharmacokinet Pharmacodyn2001 Jun;28(3):231-52. [4] Hamren B, Bjork E, Sunzel M, Karlsson M. Models for plasma glucose, HbA1c, and hemoglobin interrelationships in patients with type 2 diabetes following tesaglitazar treatment. Clin Pharmacol Ther2008 Aug;84(2):228-35. Reference: PAGE 19 (2010) Abstr 1863 [www.page-meeting.org/?abstract=1863] Oral Presentation: Methodology
{"url":"https://www.page-meeting.org/default.asp?abstract=1863","timestamp":"2024-11-09T08:10:45Z","content_type":"text/html","content_length":"21589","record_id":"<urn:uuid:d78c7e58-d75b-4ba3-ac51-453a2225feca>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00812.warc.gz"}
General formula for RMS – root mean square F_{RMS} = \sqrt{\frac{1}{T}\int_{0}^{T}{f^{2}(t)\cdot dt}} The RMS value interpretation based on the electrical current root mean square value. simple AC electric circuit W = \int{p \cdot dt} W = \int{u \cdot i \cdot dt} W = R\cdot \int{ i^{2} \cdot dt} W – work p – temporal electric power p = u \cdot i u – temporal electric voltage i – temporal electric current If the electrical current is periodical then W_{T} = R\cdot \int_{0}^{T}{ i^{2} \cdot dt} Root mean square value of the alternate electric current is an equivalent direct electric current which will produce exactly same amount of heat. R\cdot \int_{0}^{T}{ i^{2} \cdot dt} = R \cdot I^{2} \cdot T Dividing both sides of the above equation by resistance R \int_{0}^{T}{ i^{2} \cdot dt} = I^{2} \cdot T Next swapping sides in order to find relation for the DC electrical current I I^{2} \cdot T = \int_{0}^{T}{ i^{2} \cdot dt} Finally following is received Root mean square value for the alternate electric current I_{RMS} = \sqrt{\frac{1}{T}\cdot \int_{0}^{T}{ i^{2} \cdot dt}} Root mean square value for the alternate electric voltage U_{RMS} = \sqrt{\frac{1}{T}\cdot \int_{0}^{T}{ u^{2} \cdot dt}} Analysis of an AC electrical circuit A considered below AC electric circuit is composed of an AC voltage source, a resistor, a capacitor, an AC current source and an inductive coil. The nodal analysis method is being applied to calculate currents in the circuit’s branches. It is recalled that in the nodal analysis method it is always assumed that an electrical potential of one of nodes is equal to 0. Since the considered electric circuit is an AC circuit following formulas are to be applied: \underline{Z} = \frac{1}{\underline{Y}} \underline{Y} = \frac{1}{\underline{Z}} Alternate current electric circuit \underline{I}_1 \sum{(\underline{I}_s)_a} = \underline{Y}_{R1C1} \cdot \underline{V}_{s1} + \underline{I}_{s2} = \underline{V}_a \cdot ( \underline{Y}_{R1C1} + \underline{Y}_{L1} ) - \underline{V}_b \cdot ( \underline{Y}_{R1C1} + \underline{Y}_{L1} ) \underline{Y}_{R1C1} = \frac{1}{R1 - j \cdot \frac{1}{\omega \cdot C1}} \underline{Y}_{L1} = \frac{1}{\omega \cdot L1} The further calculations related to the present example are available – Node voltage method example 2. Nodal analysis Nodal analysis is one of methods used for electrical networks analysis. Nodal analysis is based on Kirchhoff’s current law. Main idea of this method is to calculate electrical potentials of every node. This will allow to calculate voltages in branches since voltage is a difference of potentials. This approach has one rule which requires to assume that potential of one chosen node has be equal zero volts. Symbolically this chosen node is connected to the ground on electrical diagram.
{"url":"http://www.mbstudent.com/category/electrical-engineering/ac-circuits/","timestamp":"2024-11-12T05:47:06Z","content_type":"application/xhtml+xml","content_length":"48972","record_id":"<urn:uuid:71c6d018-7b46-4101-b747-b4b391970dc7>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00088.warc.gz"}
Fifteen Papers in Complex Analysissearch Item Successfully Added to Cart An error was encountered while trying to add the item to the cart. Please try again. Please make all selections above before adding to cart Fifteen Papers in Complex Analysis Hardcover ISBN: 978-0-8218-3130-4 Product Code: TRANS2/146 List Price: $99.00 MAA Member Price: $89.10 AMS Member Price: $79.20 eBook ISBN: 978-1-4704-3357-4 Product Code: TRANS2/146.E List Price: $89.00 MAA Member Price: $80.10 AMS Member Price: $71.20 Hardcover ISBN: 978-0-8218-3130-4 eBook: ISBN: 978-1-4704-3357-4 Product Code: TRANS2/146.B List Price: $188.00 $143.50 MAA Member Price: $169.20 $129.15 AMS Member Price: $150.40 $114.80 Click above image for expanded view Fifteen Papers in Complex Analysis Hardcover ISBN: 978-0-8218-3130-4 Product Code: TRANS2/146 List Price: $99.00 MAA Member Price: $89.10 AMS Member Price: $79.20 eBook ISBN: 978-1-4704-3357-4 Product Code: TRANS2/146.E List Price: $89.00 MAA Member Price: $80.10 AMS Member Price: $71.20 Hardcover ISBN: 978-0-8218-3130-4 eBook ISBN: 978-1-4704-3357-4 Product Code: TRANS2/146.B List Price: $188.00 $143.50 MAA Member Price: $169.20 $129.15 AMS Member Price: $150.40 $114.80 • American Mathematical Society Translations - Series 2 Volume: 146; 1990; 112 pp MSC: Primary 32; 14; Secondary 30; 31; 35 The papers in this volume range over a variety of topics in complex analysis, including holomorphic and entire functions, integral representations, the local theory of residues, complex manifolds, singularities, and CR structures. □ Chapters □ L. A. Aĭzenberg — Multidimensional analogues of Carleman’s formula with integration over boundary sets of maximal dimension □ A. B. Aleksandrov — Blaschke’s condition and the zeros of bounded holomorphic functions □ Ya. Yu. Gaĭdis and S. G. Gindikin — On an algebraic cone in $C^6$ connected with rational curves □ S. G. Gindikin and G. M. Khenkin — The Cauchy-Fantappiè formula on projective space □ V. A. Kakichev — Application of the Fourier method to the solution of boundary value problems for functions analytic in disk bidomains □ A. L. Onishchik — On the topology of certain complex homogeneous spaces □ S. I. Pinchuk and A. Yu. Pushnikov — CR-mappings of manifolds of codimension 2 □ L. I. Ronkin — Entire functions on $C^n$ that are quasipolynomials with respect to one of the variables □ A. Sadullaev and P. V. Degtyar′ — Defect hyperplanes of holomorphic mappings □ N. N. Tarkhanov — On Poincaré duality for elliptic complexes □ V. P. Khavin — A remark on Taylor series of harmonic functions □ A. K. Tsikh — Use of residues to compute the sum of the squares of the Taylor coefficients of a rational function of two variables □ A. P. Yuzhakov — On the separation of analytic singularities and the decomposition of holomorphic functions of $n$ variables into partial fractions □ B. I. Odvirko-Budko — Some multidimensional estimates of conditional stability in the problem of analytic continuation from a subdomain of the domain of regularity □ V. V. Rabotin — A counterexample to two problems of Kobayashi • Permission – for use of book, eBook, or Journal content • Book Details • Table of Contents • Requests Volume: 146; 1990; 112 pp MSC: Primary 32; 14; Secondary 30; 31; 35 The papers in this volume range over a variety of topics in complex analysis, including holomorphic and entire functions, integral representations, the local theory of residues, complex manifolds, singularities, and CR structures. • Chapters • L. A. Aĭzenberg — Multidimensional analogues of Carleman’s formula with integration over boundary sets of maximal dimension • A. B. Aleksandrov — Blaschke’s condition and the zeros of bounded holomorphic functions • Ya. Yu. Gaĭdis and S. G. Gindikin — On an algebraic cone in $C^6$ connected with rational curves • S. G. Gindikin and G. M. Khenkin — The Cauchy-Fantappiè formula on projective space • V. A. Kakichev — Application of the Fourier method to the solution of boundary value problems for functions analytic in disk bidomains • A. L. Onishchik — On the topology of certain complex homogeneous spaces • S. I. Pinchuk and A. Yu. Pushnikov — CR-mappings of manifolds of codimension 2 • L. I. Ronkin — Entire functions on $C^n$ that are quasipolynomials with respect to one of the variables • A. Sadullaev and P. V. Degtyar′ — Defect hyperplanes of holomorphic mappings • N. N. Tarkhanov — On Poincaré duality for elliptic complexes • V. P. Khavin — A remark on Taylor series of harmonic functions • A. K. Tsikh — Use of residues to compute the sum of the squares of the Taylor coefficients of a rational function of two variables • A. P. Yuzhakov — On the separation of analytic singularities and the decomposition of holomorphic functions of $n$ variables into partial fractions • B. I. Odvirko-Budko — Some multidimensional estimates of conditional stability in the problem of analytic continuation from a subdomain of the domain of regularity • V. V. Rabotin — A counterexample to two problems of Kobayashi Permission – for use of book, eBook, or Journal content Please select which format for which you are requesting permissions.
{"url":"https://bookstore.ams.org/TRANS2/146","timestamp":"2024-11-12T23:39:42Z","content_type":"text/html","content_length":"114372","record_id":"<urn:uuid:61736f07-496f-44e8-a0f1-3e156e7e3c3e>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00598.warc.gz"}
[UPDATED] GATE PHYSICS Syllabus pdf free download [2024] [UPDATED] GATE PHYSICS Syllabus pdf free download GATE Physics Syllabus :- Section 1: Mathematical Physics Linear vector space: basis, orthogonality and completeness; matrices; vector calculus; linear differential equations; elements of complex analysis: CauchyRiemann conditions, Cauchy’s theorems, singularities, residue theorem and applications; Laplace transforms, Fourier analysis; elementary ideas about tensors: covariant and contravariant tensor, Levi-Civita and Christoffel symbols. Section 2: Classical Mechanics D’Alembert’s principle, cyclic coordinates, variational principle, Lagrange’s equation of motion, central force and scattering problems, rigid body motion; small oscillations, Hamilton’s formalisms; Poisson bracket; special theory of relativity: Lorentz transformations, relativistic kinematics, mass‐energy equivalence. Section 3: Electromagnetic Theory Solutions of electrostatic and magnetostatic problems including boundary value problems; dielectrics and conductors; Maxwell’s equations; scalar and vector potentials; Coulomb and Lorentz gauges; Electromagnetic waves and their reflection, refraction, interference, diffraction and polarization; Poynting vector, Poynting theorem, energy and momentum of electromagnetic waves; radiation from a moving charge. Section 4: Quantum Mechanics Postulates of quantum mechanics; uncertainty principle; Schrodinger equation; one-, two- and three-dimensional potential problems; particle in a box, transmission through one dimensional potential barriers, harmonic oscillator, hydrogen atom; linear vectors and operators in Hilbert space; angular momentum and spin; addition of angular momenta; time independent perturbation theory; elementary scattering theory. Section 5: Thermodynamics and Statistical Physics Laws of thermodynamics; macrostates and microstates; phase space; ensembles; partition function, free energy, calculation of thermodynamic quantities; classical and quantum statistics; degenerate Fermi gas; black body radiation and Planck’s distribution law; Bose‐Einstein condensation; first and second order phase transitions, phase equilibria, critical point. Section 6: Atomic and Molecular Physics Spectra of one‐ and many‐electron atoms; LS and jj coupling; hyperfine structure; Zeeman and Stark effects; electric dipole transitions and selection rules; rotational and vibrational spectra of diatomic molecules; electronic transition in diatomic molecules, Franck‐Condon principle; Raman effect; NMR, ESR, X-ray spectra; lasers: Einstein coefficients, population inversion, two and three level systems. Section 7: Solid State Physics & Electronics Elements of crystallography; diffraction methods for structure determination; bonding in solids; lattice vibrations and thermal properties of solids; free electron theory; band theory of solids: nearly free electron and tight binding models; metals, semiconductors and insulators; conductivity, mobility and effective mass; optical, dielectric and magnetic properties of solids; elements of superconductivity: Type-I and Type II superconductors, Meissner effect, London equation. Semiconductor devices: diodes, Bipolar Junction Transistors, Field Effect Transistors; operational amplifiers: negative feedback circuits, active filters and oscillators; regulated power supplies; basic digital logic circuits, sequential circuits, flip‐flops, counters, registers, A/D and D/A conversion. Section 8: Nuclear and Particle Physics Nuclear radii and charge distributions, nuclear binding energy, Electric and magnetic moments; nuclear models, liquid drop model: semi‐empirical mass formula, Fermi gas model of nucleus, nuclear shell model; nuclear force and two nucleon problem; alpha decay, beta‐decay, electromagnetic transitions in nuclei; Rutherford scattering, nuclear reactions, conservation laws; fission and fusion; particle accelerators and detectors; elementary particles, photons, baryons, mesons and leptons; quark model.
{"url":"https://engineeringinterviewquestions.com/gate-physics-syllabus-pdf-free-download/","timestamp":"2024-11-04T23:37:32Z","content_type":"text/html","content_length":"43556","record_id":"<urn:uuid:e3f2ea22-66b6-4ff9-9646-ae394474377f>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00838.warc.gz"}
Aurora Translation Helper I am releasing the Aurora Translation Helper by SpkLeader for use in translating Aurora 0.1a.1. Aurora 0.1a.1 Translation Helper by SpkLeader This batch file was put together in an effort to make the translation process much easier. To use this file, follow these steps: 1) Copy "xuipkg.exe", "resxloc.exe" and "resx2bin.exe" files from your Xbox 360 SDK into the /Bin folder. 2) Copy the Aurora 0.1a.1 skin file "default.xzp" into the /Skins folder. 3) Rename the 'EN-US.xml' file to one of the supported langauge codes below (ex. 'PT-BR.xml' for 4) Translate all of the strings in the xml file to the appropriate language 5) Run 'translate.cmd' 6) After running translate.cmd, a /Final folder will be created containing the new files. 7) Copy all of the /Final folder into your root Aurora directory on your xbox. 8) Test your translation running in the dash for cut off strings 9) PM your completed XML to MaesterRowen on www.realmodscene.com to have it reviewed and included in upcoming language updates. If you have your language complete, you can either share your XML on the forums and/or submit them to Phoenix so that they can be included in as an official release update. Phoenix will not accept modifications to Default.xzp (skin) as part of your translation. Strings must be translated to fit within the space allowed. Aurora's default skin is designed around a consistent look and changing margins and padding is something that we want to avoid. As of Aurora 0.1a.1, please refer to the list of supported languages below to see what languages are available for translation. The ones that have been completed and accepted by Phoenix and are either already included or will be included in next update have been marked. Supported Languages: [S:"en-US", // English:S] (included since always) "ja-JP", // Japanese [S:"de-DE", // German:S] (included since 0.1a1) [S:"fr-FR", // French:S] (included since 0.1a1) [S:"es-ES", // Spanish:S] (included since 0.1a1) [S:"it-IT", // Italian:S] (included since 0.1a1) "ko-KR", // Korean "zh-CHT", // Traditional Chinese [S:"pt-BR", // Portuguese:S] (included since 0.1a1) [S:"pl-PL", // Polish:S] (included since 0.2b) [S:"ru-RU", // Russian:S] (included since 0.2b) [S:"sv-SE", // Swedish:S] (included since 0.2b) [S:"tr-TR", // Turkish:S] (included since 0.1a1) "nb-NO", // Norwegian [S:"nl-NL", // Dutch:S] (included since 0.2b) [S:"zh-CHS" // Simplified Chinese:S] (will be included in 0.3b) Aurora 0.1a.1 Translation Helper.rar Edited by Swizzy Maybe you can add Finnish to supported language, I'm almost done with translation to Finnish. Now I need just wait for support Maybe you can add Finnish to supported language, I'm almost done with translation to Finnish. Now I need just wait for support you need to supply the Locale ID for that, which is probably fi-FI MaesterRowen 14 Maybe you can add Finnish to supported language, I'm almost done with translation to Finnish. Now I need just wait for support As far as Aurora 0.1a.1 goes, we only plan to support the languages available through the official dash. At a later date in a future version, we might come up with a good way to support and change more languages; however, the EN-US.xml file is going to change a lot by then, so translating now won't do you much good. As far as Aurora 0.1a.1 goes, we only plan to support the languages available through the official dash. At a later date in a future version, we might come up with a good way to support and change more languages; however, the EN-US.xml file is going to change a lot by then, so translating now won't do you much good. Okay, thanks for information. I have now completed my translation so I'll just wait for possibly coming support. MatteIta 132 I corret some text and"bugs" on my italian's translation,i would test it before send again to you but folder "final\media\locales\it-it\" is empty, no "AuroraLang[iT-IT].xus",why? EDIT: i'm tring to translate a new EN-US.xml maybe i've canceled some line. PeZeD 6 if you had the package before the upadate there is a missing a line to compil it well , <data name="DynamicStrings.IDS_LOCALE_TRANSLATIONBY"><value> </value></data> you have to add it after line 464 (you can check in the en-en.xml provided here) LarsOss 0 I want to help to translate to dutch, but i want to know if Notepad++ is the right program to translate the file in. I want to help to translate to dutch, but i want to know if Notepad++ is the right program to translate the file in. You can use it, you can also use the Dashlaunch Translation tool i wrote ages ago... it'll load and save the files in a compatible format I can translate it to Polish but I have a request, could someone share just the files from the 360 SDK or a link to a working SDK because I searched it and I found an installer but it wad corrupted or something. Thanks in advance and sorry if that is a stupid request. I can translate it to Polish but I have a request, could someone share just the files from the 360 SDK or a link to a working SDK because I searched it and I found an installer but it wad corrupted or something. Thanks in advance and sorry if that is a stupid request. http://www.xbox360iso.com/xbox360-xdk-20675-t344997.html <--- this thread ussually contains atleast 1 link to a working version... EDIT: Sorry, I found it but the hosting is terrible, if aomwone is able to upload the files I would be really happy... I saw that thread before and checked the links to setups but any of the links weren't working. I downloaded something called recovery but I think that's not the thing that I need, am I right? It's just like a addon or something to the SDK, it asks me the path of SDK to install it there. Do you have it installed, would be nice if you or some other person could upload just the files needed or mirror a working setup. Yeah, recovery is used to basically reset a XDK unit (console/hardware) you need the XenonSDKSetup_####....exe or whatever it's called... I do have it installed, but... i'm working on something to make translations easier... i might include all the files required to compile the skin/translation for testing with the release, i'm not sure yet... until then, you can etheir just do the translation manually with notepad, notepad++ or whatever text editor you prefer... you can also ask someone to compile it for you, you only need the xdk to compile the translation for testing, and you're not likely to be needing to make alot of compilations Yeah, recovery is used to basically reset a XDK unit (console/hardware) you need the XenonSDKSetup_####....exe or whatever it's called... I do have it installed, but... i'm working on something to make translations easier... i might include all the files required to compile the skin/translation for testing with the release, i'm not sure yet... until then, you can etheir just do the translation manually with notepad, notepad++ or whatever text editor you prefer... you can also ask someone to compile it for you, you only need the xdk to compile the translation for testing, and you're not likely to be needing to make alot of compilations Yup, it may be hard to fit the worlds but I'll try to do it the way you said. Wish me luck ;D Yup, it may be hard to fit the worlds but I'll try to do it the way you said. Wish me luck ;D I've released the tool, but i didn't include the files you need to compile the skin for legal reasons... You can find the tool here: http://www.realmodscene.com/index.php?/topic/3753-aurora-translation-tool/ I've released the tool, but i didn't include the files you need to compile the skin for legal reasons... You can find the tool here: http://www.realmodscene.com/index.php?/topic/3753-aurora-translation-tool/ Just when I'm in half of the work with translating. If I have any problem I'll start from sctatch using your new tool. I wanted to start to translate Aurora into Russian but idk whehe can i find files from 1st step. Can u help me please? I wanted to start to translate Aurora into Russian but idk whehe can i find files from 1st step. Can u help me please? http://www.realmodscene.com/index.php?app=core&module=attach&section=attach&attach_id=1159 <--- this file contains everything you need to get started... To compile it (for testing) you need to etheir send it to someone like me whom have the XDK or if you have the XDK/can get hold of it you just use the compile function =) http://www.realmodscene.com/index.php?app=core&module=attach&section=attach&attach_id=1159 <--- this file contains everything you need to get started... To compile it (for testing) you need to etheir send it to someone like me whom have the XDK or if you have the XDK/can get hold of it you just use the compile function =) Ok, SO i can do it w/o XDK or SDK. Of ci wanted to test it by myself but i cant find SDk. i found tons version of XDK but it wont install without SDK Ok, SO i can do it w/o XDK or SDK. Of ci wanted to test it by myself but i cant find SDk. i found tons version of XDK but it wont install without SDK You need Visual Studio 2010 installed to install the XDK... what you've found is probably Recovery, which isn't what you'd want... you want the bigger file (ussually around 1.4 - 1.6GB) but yes, you can use my tool without it to make the translation, to check that it looks ok and perhaps change the font size so that things fit better on screen you'll need to compile it, for that you can ask someone to do it for you or get hold of the XDK... the easier way i suppose is to have someone compile for you... I'm generally available on IRC, so if you want me to compile for you, just send me a message there and i'll help you out You need Visual Studio 2010 installed to install the XDK... what you've found is probably Recovery, which isn't what you'd want... you want the bigger file (ussually around 1.4 - 1.6GB) but yes, you can use my tool without it to make the translation, to check that it looks ok and perhaps change the font size so that things fit better on screen you'll need to compile it, for that you can ask someone to do it for you or get hold ou.of the XDK... the easier way i suppose is to have someone compile for you... I'm generally available on IRC, so if you want me to compile for you, just send me a message there and i'll help you out Ok I've just translated 150/943 so, when I finish it I'll contact you. Btw, in advance, what's IRC? Thanks a lot Ok I've just translated 150/943 so, when I finish it I'll contact you. Btw, in advance, what's IRC? Thanks a lot IRC is Internet Relay Chat, it's where everyone high-tech spend their time speaking to other high-tech ppl you can always send me a pm here with a link to your xml, and i'll respond with a link to the skin compiled one... i can also adjust the size for you once you've got the translation done • 1
{"url":"https://www.realmodscene.com/index.php?/topic/3706-aurora-translation-helper/&tab=comments#comment-28153","timestamp":"2024-11-06T19:01:38Z","content_type":"text/html","content_length":"360197","record_id":"<urn:uuid:88c4de68-ea2a-484c-8896-1cd33eb7bfed>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00234.warc.gz"}
Capital Budgeting Decision Techniques and Analysis Capital budgeting is one of the processes of deciding which long-term capital expenditures are worth a company’s money based on their ability to benefit the company in the long run. Learn more about the importance of capital budgeting in business and also how to use the techniques in decision-making. What is Capital Budgeting? Capital budgeting is a series of techniques or processes for deciding when to invest in projects. For example, one can use capital budgeting techniques to evaluate a potential investment in a new factory, manufacturing line, or computer system. Basically, capital Budgeting is a decision-making technique in which an organization prepares and decides any long-term investment whose cash flow returns are expected to be earned beyond a year. Generally, an organization can make any of the following investment decisions: • Expansion • Purchasing • Substitute • Brand-new Product • R&D • Significant Publicity Campaign • Investment in social welfare Capital Budgeting Decision Analysis The capital budgeting decision-making process basis is on determining if the projects and expenditure areas are worth financing in cash from the company’s capitalization framework of debt, equity, and earnings – or not. Techniques for Capital Budgeting Decision Analysis The following are the five main techniques or processes one can use for capital budgeting decision analysis in order to choose the viable investment: #1. Payback Period The payback period is the number of years it takes to recover the original expense of the investment – the cash outflow. So, this is to say that the faster the payback period, the better. • Provides a rough estimate of liquidity. • Provides some details about the investment’s risk. • The calculation is easy. #2. Discounted Payback Period This is one of the most important cash budgeting techniques. • It takes into account the time value of capital. • Using the cost of money considers the risk in the project cash flows. #3. Net Present Value The net present value (NPV) as a cash budgeting technique is the amount of the present values of all projected cash flows in the event that a project is undertaken. NPV = CF0 + CF1/(1+k)1+ . . . + CFn/(1+k)n • CF0 = Initial Investment • CFn = AfterTax Cash Flow • K = Required Rate of Return The required rate of return is typically the Weight Average Cost of Capital (WACC) – which includes both debt and equity rates as the total capital. • It takes into account the time value of capital. • Takes into account all of the project’s cash flows • Using the cost of money considers the risk in the project cash flows. • Indicates whether the investment will increase the value of the project or the business. #4. Internal Rate of Return (IRR) The IRR as a cash budgeting technique is the discount rate applied when the present value of the projected incremental cash inflows equals the project’s initial expense. Specifically, when PV(Inflows) = PV (Outflows) • It takes into account the time value of capital. • Takes into account all of the project’s cash flows • Using the cost of money considers the risk in the project cash flows. • Indicates whether the investment will increase the value of the project or the business. #5. Profitability Index The Profitability Index as a cash budgeting technique calculation is by dividing the Present Value of a Project’s potential cash flows by the initial cash outlay. PI = PV of Future Cash Flow / CF0 So, in which case, The initial investment is CF0. They often refer to this ratio as the Profit Investment Ratio (PIR) or the Value Investment Ratio (VIR). • It takes into account the time value of capital. • Takes into account all of the project’s cash flows • Using the cost of money considers the risk in the project cash flows. • Indicates whether the investment will increase the value of the project or the business. • When capital is scarce, it is useful for ranking and also in selecting ventures. Cash Budgeting Examples Cash Budgeting Example #1 An organization is considering two projects before deciding on one. The following are the predicted cash flows. The company’s WACC is 10%. So, using the more traditional capital budgeting decision methods, let us compute and determine which project will be chosen over the other. The net present value (NPV) of Project A is $1.27. The net present value (NPV) Project B is $1.30. Project A’s internal rate of return is 14.5%. Project B’s internal rate of return is 13.1%. Since the net present value for both ventures is so similar, making a decision here is difficult. As a result, we choose the following approach to determine the rate of return on investment in each of the two ventures. This now indicates that Project A would generate higher returns (14.5 percent) than Project B, which is producing decent but lower returns than Project A. As a result, we can choose Project A over Project B. Cash Budgeting Examples #2 When choosing a project based on the Payback period, we must consider the inflows each year and the year in which the outflow is offset by the inflows. There are now two approaches for calculating the payback duration based on cash inflows that can be even or different. Project A’s payback period After ten years, the inflow has remained constant at $100 million. Since Project A represents a constant cash flow, the payback period is a measure of Initial Investment / Net Cash Inflow. As a result, it will take roughly ten years to complete Project A to recover the initial investment. Project B’s payback period Adding in the inflows, the $1000 million investment is covered in four years. Project B, on the other hand, has erratic cash flows. If you add up the yearly inflows, you can easily determine which year the investment and returns are near. As a result, the initial investment criteria for Project B is met in the fourth year. In comparison, Project A takes longer to produce any benefits for the entire enterprise, so Project B should be chosen over Project A. Cash Budgeting Example #3 Consider a project with a $100,000 initial investment. Using the Discounted Payback Period process, so we can determine whether or not the project range is worthwhile. This is an extended type of payback period in which the time value of money is considered. Also, one can use the discounted cash flows to calculate the number of years needed to satisfy the initial Given the observations below: Certain cash inflows have occurred over the years as a result of the same initiative. We compute the discounted cash flows at a fixed discount rate using the time value of money. The discounted cash flows are shown in column C above, and column D shows the initial outflow that is offset each year by the projected discount cash inflows. The payback period will be between years 5 and 6. Now, because the project’s life is seen to be 6 years, and the project returns in a shorter period, we can infer that this project has a better NPV. As a result, selecting this project, which has the potential to add value to the business, will be a wise decision. Cash Budgeting Example #4 Using the Profitability Index budgeting method to choose between two projects that are tentative with a given business. The following are the expected cash inflows from the two projects: The Project A Profitability Index is $1.16. Project B’s profitability index is $0.90. The profitability index also includes converting the business’s daily predicted future cash inflows using a discount rate, which is usually the WACC percent. The number of these present values of potential cash inflows is compared to the initial expenditure to calculate the profitability index. If the Profitability index is greater than one, it indicates that inflows are more beneficial than outflows. In this case, Project A has an index of $1.16 compared to Project B, which has an index of $0.90, indicating that Project A is clearly a better option than Project B and is thus chosen. Benefits of Capital Budgeting 1. It aids in the decision-making process when it comes to investment opportunities. 2. Control over the company’s expenditures is adequate. 3. Promotes an understanding of risks and their implications for business. 4. Increase the wealth of shareholders and improve market holdings 5. Avoid over or underinvestment. Constraints of Capital Budgeting 1. Decisions are made for a long time and, as a result, are not usually reversible. 2. Because of the subjective risk and discounting factor, the nature of the introspection is introspective. 3. Few techniques or calculations are based on assumptions; therefore, uncertainty may result in incorrect application. Methods of Capital Budgeting: An Overview 1. A discounted payback period takes into account the time worth of money but does not account for cash flow. 2. It disregards all cash flow although its payback time is straightforward and quick. 3. The internal rate of return (IRR) is a single metric (return), but it has the potential to be misleading. 4. Net present value (NPV) is a financially solid method of ranking projects across several categories. How can organizations effectively communicate the results of their capital budgeting analysis? Organizations can effectively communicate the results of their capital budgeting analysis by presenting clear, concise, and well-organized data and analysis, and by making sure that the results are easy for stakeholders to understand. What role do financial projections play in capital budgeting? Financial projections play a crucial role in capital budgeting, as they provide the data and analysis needed to evaluate the potential financial outcomes of different investment options. What factors should organizations consider when making capital budgeting decisions? Organizations should consider a wide range of factors when making capital budgeting decisions, including the potential risks and rewards of each investment, the availability of financial resources, the organization’s goals and objectives, and the impact of each investment on the organization’s financial and operational performance. How can organizations ensure that their capital budgeting decisions align with their overall business strategy? Organizations can ensure that their capital budgeting decisions align with their overall business strategy by considering their goals and objectives, and by involving key stakeholders in the capital budgeting process. What role do stakeholders play in capital budgeting? Stakeholders play a critical role in capital budgeting, as their input and feedback can help organizations make better investment decisions, and can ensure that the results of the capital budgeting analysis are well-received and effectively communicated. Capital budgeting is an integral and critical process for an organization to select between projects in the long run. It is a protocol that must be followed before investing in any long-term project or company. It provides management with methods for properly calculating returns on investment and also making measured judgments to understand whether the selection will be advantageous for increasing the company’s value in the long run or not. Capital Budgeting FAQs What is capital budgeting and why is it important? Capital budgeting is important because it creates accountability and measurability. … The capital budgeting process is a measurable way for businesses to determine the long-term economic and financial profitability of any investment project. A capital budgeting decision is both a financial commitment and an investment. What is capital budgeting formula? The capital cost factors in the cash flow during the entire lifespan of the product and the risks associated with such a cash flow. Then, the capital cost is calculated with the help of an estimate. Formula: Net Present Value (NPV) = Rt. Which is first in capital budgeting? Project Generation Generating a proposal for investment is the first step in the capital budgeting process. Why is NPV the best capital budgeting method? Each year’s cash flow can be discounted separately from the others making NPV the better method. The NPV can be used to determine whether an investment such as a project, merger, or acquisition will add value to a company.
{"url":"https://businessyield.com/finance-accounting/capital-budgeting/","timestamp":"2024-11-09T15:58:20Z","content_type":"text/html","content_length":"242625","record_id":"<urn:uuid:1550151a-ec61-4b65-af58-817be653de52>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00075.warc.gz"}
Crack various integer and floating-point data formats Version on this page: 3.1 LTS Haskell 22.40: 3.4 Stackage Nightly 2024-11-07: 3.14 Latest on Hackage: 3.14 This version can be pinned in stack with:crackNum-3.1@sha256:2fa562f83a19c732935b4771d9f23362ee28467b9aa92aac566c98f456a861f3,1234 Module documentation for 3.1 There are no documented modules for this package. Decode/Encode Integers, Words, and IEE754 Floats On Hackage: http://hackage.haskell.org/package/crackNum Example: Encode a decimal numer as a single-precision IEEE754 number $ crackNum -fsp -- -2.3e6 Satisfiable. Model: ENCODED = -2300000.0 :: Float S ---E8--- ----------S23---------- Binary layout: 1 10010100 00011000110000110000000 Hex layout: CA0C 6180 Precision: Single Sign: Negative Exponent: 21 (Stored: 148, Bias: 127) Classification: FP_NORMAL Binary: -0b1.0001100011000011p+21 Octal: -0o1.061414p+21 Decimal: -2300000.0 Hex: -0x2.3186p+20 Rounding mode: RNE: Round nearest ties to even. Example: Decode a single-precision IEEE754 number float from memory-layout $ crackNum -fsp 0xfc00 abc1 Satisfiable. Model: DECODED = -2.6723903e36 :: Float S ---E8--- ----------S23---------- Binary layout: 1 11111000 00000001010101111000001 Hex layout: FC00 ABC1 Precision: Single Sign: Negative Exponent: 121 (Stored: 248, Bias: 127) Classification: FP_NORMAL Binary: -0b1.00000001010101111000001p+121 Octal: -0o2.00527404p+120 Decimal: -2.6723903e36 Hex: -0x2.02AF04p+120 $ crackNum -fdp 0xfc00 abc1 7F80 0001 Example: Decode a custom (2+3) IEEE754 float from memory-layout $ crackNum -f2+3 0b10011 Satisfiable. Model: DECODED = -0.75 :: FloatingPoint 2 3 S E2 S2 Binary layout: 1 00 11 Hex layout: 13 Precision: 2 exponent bits, 2 significand bits Sign: Negative Exponent: 0 (Subnormal, with fixed exponent value. Stored: 0, Bias: 1) Classification: FP_SUBNORMAL Binary: -0b1.1p-1 Octal: -0o6p-3 Decimal: -0.75 Hex: -0xcp-4 Example: Encode an integer as a 7-bit signed word $ crackNum -i7 12 Satisfiable. Model: ENCODED = 12 :: IntN 7 Binary layout: 000 1100 Hex layout: 0C Type: Signed 7-bit 2's complement integer Sign: Positive Binary: 0b1100 Octal: 0o14 Decimal: 12 Hex: 0xc Usage info Usage: crackNum value OR binary/hex-pattern -i N Signed integer of N-bits -w N Unsigned integer of N-bits -f fp Floating point format fp -r rm Rounding mode to use. If not given, Nearest-ties-to-Even. -h, -? --help print help, with examples -v --version print version info crackNum -i4 -- -2 -- encode as 4-bit signed integer crackNum -w4 2 -- encode as 4-bit unsigned integer crackNum -f3+4 2.5 -- encode as float with 3 bits exponent, 4 bits significand crackNum -f3+4 2.5 -rRTZ -- encode as above, but use RTZ rounding mode. crackNum -fbp 2.5 -- encode as a brain-precision float crackNum -fdp 2.5 -- encode as a double-precision float crackNum -i4 0b0110 -- decode as 4-bit signed integer, from binary crackNum -w4 0xE -- decode as 4-bit unsigned integer, from hex crackNum -f3+4 0b0111001 -- decode as float with 3 bits exponent, 4 bits significand crackNum -fbp 0x000F -- decode as a brain-precision float crackNum -fdp 0x8000000000000000 -- decode as a double-precision float - For encoding: - Use -- to separate your argument if it's a negative number. - For floats: You can pass in NaN, Inf, -0, -Inf etc as the argument, along with a decimal float. - For decoding: - Use hexadecimal (0x) or binary (0b) as input. Input must have one of these prefixes. - You can use _,- or space as a digit to improve readability for the pattern to be decoded VIM users: You can use the https://github.com/LeventErkok/crackNum/blob/master/crackNum.vim file to use CrackNum directly from VIM. Simply locate your cursor on the text to crack, and use the command :CrackNum options. Version 3.1, 2021-03-29 Version 3.0, 2021-03-29 • A complete rewrite, much simplified, and supporting arbitrary precision floats. Some of the old features and the library are dropped; so if you rely on the library nature of CrackNum, do not upgrade. For other users who merely use crackNum as an executable, the new version is strongly recommended. Version 2.4, 2020-09-05 • Changes required to compile cleanly with GHC 8.10.2 Version 2.3, 2018-11-17 • Remove dependency on the ieee754 and reinterpret-cast packages. The goal is to remove any FFI dependencies. We now define and export the required utilities directly in the CrackNum package. Version 2.2, 2018-09-01 • Instead of data-binary-ieee754, use reinterpret-cast package. According to documents, the former is deprecated. Version 2.1, 2018-07-20 • Support for vi-editor bindings. See the file “crackNum.vim” in the distribution or in the github repo You can put “so ~/.vim/crackNum.vim” (use the correct path!) and have vi crack numbers directly from inside your editor. Simply locate your cursor on a binary/hex stream of digits and type “:CrackNum”. See the “crackNum.vim” file for binding details. Version 2.0, 2018-03-17 • Import FloatingHex qualified to avoid GHC 8.4.1 compilation issue Version 1.9, 2017-01-22 • Minor fix to printing of +/-0 Version 1.8, 2017-01-15 • Bump up FloatingHex dependency to >0.4, this enables proper support for large doubles Version 1.7, 2017-01-14 • Fix a snafu in reading hexadecimal floats Version 1.6, 2017-01-14 • Add support for hexadecimal-floats. These now work both in toIEEE option as input, and also when printing the values out. (i.e., numbers of the form 0x1.abp-3, etc.) Version 1.5, 2016-01-23 • Typo fixes; no functionality changes Version 1.4, 2016-01-17 • Fix NaN nomenclature: Screaming->Signaling • Add an example to README.md Version 1.3, 2015-04-11 • Fix docs, github location Version 1.2, 2015-04-11 • Fix the constant qnan values for SP/DP • Add conversions from float/double. Much easier to use. • Better handling of nan values. Version 1.1, 2015-04-02 • Clean-up the API, examples etc. Version 1.0, 2015-04-01 • First implementation. Supports HP/SP/DP and signed/unsigned numbers in 8/16/32/64 bits.
{"url":"https://www.stackage.org/lts-18.8/package/crackNum-3.1","timestamp":"2024-11-07T22:23:57Z","content_type":"text/html","content_length":"21292","record_id":"<urn:uuid:19211aa9-0478-42dd-b543-dc9c316ba438>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00495.warc.gz"}
🎓 Doc Freemo :jpf: 🇳🇱 (@freemo@qoto.org) @freemo unless you can logically deduce from the the coins you've seen that it must be solved or must not be solved, you cannot make the assumption in either direction. @khird I actually created a notation/process where you can determine if a configuration is solvable. I didnt have a chance to try it out on various combos. Works like this. Represent selection modes with binary numbers. So selecting one out of 4 boxes might look like 0001, 2 selected out of 4 would be 0011 or 0101. To see if two selection modes are equivalent reduce them to what im calling "standard form". What that means is you left or right shift the binary number (taking any digits that roll off and replacing it on the other end) and do this until the number is in its smallest representation. So 1010 would have a standard form of 0101 (most significant digits on the left). If the standard form of two selection modes are the same then they are equivelant. Next compute all possible selection modes in standard form for the given parameters. For the original puzzle this would be: No other selection modes are possible. Next do an OR operation on all possible selection modes, here we get: If the resulting binary number has one or fewer 0s then a solution is possible. If it has greater then a solution is impossible.
{"url":"https://qoto.org/@freemo/107645321894904551","timestamp":"2024-11-09T10:49:56Z","content_type":"text/html","content_length":"106258","record_id":"<urn:uuid:7fe07d21-61d1-4299-b16c-805abc66d486>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00437.warc.gz"}
Gain-Switched Short Pulse Generation from 1.55 µm InAs/InP/(113)B Quantum Dot Laser Modeled Using Multi-Population Rate Equations Radio Frequency and Photonics Engineering, Dresden University of Technology, 01069 Dresden, Germany Electrical and Electronics Engineering, University of Gaziantep, 27310 Gaziantep, Turkey Author to whom correspondence should be addressed. Submission received: 22 September 2022 / Revised: 19 October 2022 / Accepted: 24 October 2022 / Published: 17 November 2022 The gain-switching properties of an InAs-InP (113)B quantum dot laser based on multi-population rate equations are examined theoretically in detail to generate shorter pulses with the application of a Gaussian pulse beam to the laser excited state. The numerical results demonstrated that as the homogeneous and the inhomogeneous broadening increase, the differential gain, the gain compression factor, and the threshold current of the excited state decrease while the threshold current of the ground state increases. It was also observed that the contribution of the excited state to gain-switched output pulses depend on not only the value of the inhomogeneous broadening but also the magnitude of the applied current. Additionally, it was found that without an optical beam, the output pulse has a long pulse width due to ground state emissions and the change in the parameters strongly affecting the output. However, under the optical beam, narrow pulses around 26 ps have high peak power owing to the excited state emission generated even at low currents and also the change in the laser parameters having a negligible effect. Finally, the gain-switching characteristics with and without a Gaussian pulse beam are shown to be similar for liner-gain and nonlinear-gain cases except that higher peak power and narrower output pulses are obtained for the linear-gain case. 1. Introduction Quantum dot lasers have superior features compared to quantum well counterparts [ ] such as temperature independence, a low threshold current [ ], they can be operated with low chirp for both ground state (Grs) and excited state (Exs) lasing [ ], and have better resistance values for optical feedback [ ]. The feature of being relatively insensitive to temperature makes quantum dot (Q-Dot) lasers avoid the need for thermoelectric coolers [ ]. Since these mentioned features mean that Q-Dot lasers are compact, cheap, lightweight, and low power systems, they are appropriate candidates for applications in optical communications [ Low-cost directly adjustable lasers will play an important role in next-generation telecommunication links for developing uncooled and non-insulating communication devices. As a result, Q-Dot lasers are very promising for such applications. In optical communication systems, a 1.55 µm light source is required because of the low loss in transmission. InGaAs-GaAs Q-Dot devices do not allow laser emissions over 1.45 µm, which causes information loss in long-distance transmissions. For this reason, to achieve standard long-distance transmission, one uses long-haul optical transmission at a wavelength of up to 1.55 µm. Consequently, one chooses the InAs-InP(113)B Q-Dot lasers with the InAs Q-Dot laser grown on an InP substrate that emits at a wavelength of 1.55 µm [ ]. Owing to their simple fabrication methods and low manufacturing costs, the gain-switched semiconductor laser diodes are preferred over their counterparts. Therefore, in this study, we used the gain-switching method to obtain short pulses. To date, although several studies have been performed to investigate 1.55 µm InAs-InP Q-Dot lasers, to the best of our knowledge, no detailed work has been performed under the gain-switching condition for InAs-InP Q-Dot lasers based on multi-population rate equations involving nonlinear gain. For this, an external optical Gaussian beam (EOGB) was applied to the excited state (Exs) of the Q-Dot laser with current injection to the wetting layer (Wly) to investigate the properties of the gain-switched pulses. Although, Q-Dot lasers have superior performance, they cannot always satisfy the expected features in reality due to the homogeneous and inhomogeneous broadenings and the gain compression factor [ ]. To obtain an appropriate and suitable approach to experimental results theoretically, as many aspects of real experiments as possible should be taken into account when performing the simulation. For this, the effect of the homogeneous and the inhomogeneous broadening with the effect of the gain compression were taken into consideration in this study. Therefore, this paper is organized as Section 2 introduces the theoretical description of multi-population rate equations for the direct relaxation model considering the nonlinear-gain case. The obtained results discussed in Section 3 show that by applying an EOGB into the Exs, very short pulses, with a pulse width of around 26 ps and high peak power, are generated due to the Exs emissions at low currents. In addition, it was shown that the contribution of excited state to gain-switched output pulses does not depend on only the value of the inhomogeneous broadening but also on the magnitude of the applied current. Furthermore, the effect of homogeneous and inhomogeneous broadening on the differential gain, the gain compression factor, and the threshold current is also investigated. Finally, our results are summarized and concluded in Section 4 2. Materials and Methods The laser model used in this study for InAs-InP(113)B Q-Dot is based on the multi population rate equations for the direct relaxation model described in [ ]. The laser carrier and photon density equations were solved by the fourth order Runge–Kutta method using MATLAB software to investigate the carrier dynamics in the two lowest energy levels, Grs and Exs. The initial values of carrier and photon densities were taken to be a zero. A stimulated emission term was also added to the Exs to allow lasing from both states. The effect of temperature and carrier loss were neglected in the study. We assumed that the carriers were directly injected from the contacts to the Wly; therefore, the carrier dynamics were not considered in the barrier. Direct transition from the Wly to the Grs was introduced to reproduce the experimental results [ ]. The Q-Dot active region consists of the Q-Dots ensemble having different sizes. In the model, in order to consider the effect of the inhomogeneous broadening (Γ ), the Q-Dot ensemble is divided into 2X + 1 groups, depending on their resonant energies for the interband transition [ Figure 1 shows the relaxation mechanisms in the xth Q-Dot subgroup. Energies of Exs and Grs of xth Q-Dot are represented as $E E x s _ x$ $E G r s _ x$ , respectively. As a result, the longitudinal cavity photon modes of up to 2P + 1 are constructed in the cavity [ ]. When the index x is equal to X, this situation corresponds to the central Q-Dot group. When index p is equal to P, this case corresponds to the central mode with the transitional energies of $E E x s 0$ $E G r s 0$ for Exs and Grs, respectively. Each Q-Dot group energy width ( $Δ E$ ) and mode energy separation ( $Δ E p$ ) are assumed to be equal and taken to be as 1 meV [ ]. The xth Q-Dot group energy and pth mode energies are indicated by: $E E x s _ x , G r s _ x = E E x s 0 , G r s 0 − ( X − x ) Δ E x = 1 , 2 , … , 2 X + 1 ,$ $E E x s _ p , G r s _ p = E E x s 0 , G r s 0 − ( P − p ) Δ E p p = 1 , 2 , … , 2 P + 1 ,$ When injection current I is applied to the Wly of the Q-Dot laser, some carriers move to the lower state $E E x s − x$ and $E G r s − x$ with a capture time of $τ W l y − E x s _ x$ for the transition from the Wly to $E E x s _ x$ and a relaxation time of $τ W l y − G r s _ x$ for the Wly to the $E G r s _ x$ transition. Some photons are emitted from the Wly due to spontaneous emission over a time $τ W l y − r$. However, in the Exs, some carriers are relaxed into the $E G r s _ x$ with a relaxation time $τ Exs _ x − Grs _ x$. Furthermore, the more energetic carriers are thermally transferred to the Wly with a time $τ Exs _ x − Wly$. The other carriers recombine spontaneously with an emission time $τ Exs − r$ or by the stimulated emission of photons. The same mechanism for the carrier dynamics transitions is applied at the Grs level. The same processes occur for the carrier population in the Grs level with regard to the Exs. The capture and relaxation times can be calculated [ ] as: $τ W l y − E x s _ x = 1 ( A W l y + C W l y N W l y ) ( 1 − f E x s _ x ) ( G x E x s ) ,$ $τ E x s _ x − G r s _ x = 1 ( A E x s + C E x s N W l y ) ( 1 − f G r s _ x ) ,$ $τ W l y − G r s _ x = 1 ( A W l y + C W l y N W l y ) ( 1 − f G r s _ x ) ( G x G r s ) ,$ $, N W l y$ is the carrier density in the Wly, $A W l y , E x s$ $C W l y , E x s$ are the phonon and Auger coefficients in the Wly and Exs, respectively. Their values are estimated experimentally [ $f E x s _ x , G r s _ x$ is the occupation probabilities in xth group of Q-Dot in the Exs and Grs. $f E x s _ x = N E x s _ x μ E x s N o G x E x s , f G r s _ x = N G r s _ x μ G r s N o G x G r s ,$ • $μ E x s , G r s$ is the degeneracy of the Exs and Grs, • $N o$ is the Q-Dot density and • $N E x s _ x , G r s _ x$ is the carrier density in the Exs and Grs of xth Q-Dot. • $G x E x s , x G r s$ is the density rate of xth Q-Dot in the Exs and Grs. To calculate $G x E x s , x G r s$ , the Q-Dot size distribution is assumed to be a Gaussian function given as: $G x E x s , x G r s = G i n h , E x s , G r s ( E E x s _ x , G r s _ x − E E x s 0 , G r s 0 ) Δ E ,$ $G i n h , E x s , G r s ( E E x s _ x , G r s _ x − E E x s 0 , G r s 0 ) = 1 2 π σ e x p ( − ( E E x s _ x , G r s _ x − E E x s 0 , G r s 0 ) 2 2 σ 2 ) ,$ The full-width half-maximum of the Gaussian function is given as Γ = 2.35 σ. In other words, Γ is described as inhomogeneous broadening. The carrier escape time is related to the carrier capture time [ ] and given as: $τ E x s _ x − W l y = τ W l y − E x s _ x μ E x s μ W l y e ( E W l y − E E x s _ x ) / k B T ,$ $τ G r s _ x − W l y = τ W l y − G r s _ x μ G r s μ W l y e ( E W l y − E G r s _ x ) / k B T ,$ $τ G r s _ x − E x s _ x = τ E x s _ x − G r s _ x μ G r s μ E x s e ( E E x s _ x − E G r s _ x ) / k B T ,$ $μ W l y$ is the degeneracy of the Wly, k is the Boltzmann constant, T is the temperature, and E is the energy of the Wly. Change in the carrier density in the Wly, Exs, and Grs and change in the photon density in the Exs and Grs for the multi-modes rate equations are given as: $d N W l y d t = I q V + ∑ x N E x s _ x τ E x s _ x − W l y + ∑ x N G r s _ x τ G r s _ x − W l y − N W l y τ W l y − E x s ¯ − N W l y τ W l y − G r s ¯ − N W l y τ W l y − r ,$ $d N E x s _ x d t = N W l y τ W l y − E x s _ x ¯ + N G r s _ x ( 1 − f E x s _ x ) τ G r s _ x − E x s _ x − N E x s x τ E x s x − G r s x − N E x s x τ E x s x − W l y − N E x s x τ E x s − r − Γ v g ∑ p g p x E x s S E x s p + Γ v g ( 1 − f E x s x ) g p x E x s ( 2 f E x s x − 1 ) o p t ,$ $d N G r s _ x d t = N W l y τ W l y − G r s ¯ + N E x s _ x τ E x s _ x − G r s _ x − N G r s x ( 1 − f E x s x ) τ G r s x − E x s x − N G r s x τ G r s x − W l y − N G r s x τ G r s − r − Γ v g ∑ p g p x G r s S G r s p ,$ is the volume, is the charge, is the confinement factor, and is the group velocity. $τ W l y − E x s ¯$ , and $τ W l y − G r s ¯$ indicate the average capture times from the Wly to Exs and from the Wly to Grs in Q-Dot ensemble. They are defined as follows: $τ W l y − E x s ¯ = ∑ x 1 ( A W l y + C W l y N W l y ) ( 1 − f E x s _ x ) ( G x E x s ) ,$ $τ W l y − G r s ¯ = ∑ x 1 ( A W l y + C W l y N W l y ) ( 1 − f G r s _ x ) ( G x G r s ) ,$ Nonlinear gain in Exs and Grs is given as: $g p x E x s = μ E x s π q 2 ℏ c n r ε 0 m 0 2 N o | P E x s σ | 2 E E x s _ x ( 2 f E x s _ x − 1 ) G x E x s L E x s ( E E x s _ p − E E x s _ x ) 1 1 + ε E x s p S E x s p ,$ $g p x G r s = μ G r s π q 2 ℏ c n r ε 0 m 0 2 N o | P G r s σ | 2 E G r s x ( 2 f G r s _ x − 1 ) G x G r s L G r s ( E G r s _ p − E G r s _ x ) 1 1 + ε G r s p S G r s p ,$ , and are the speed of light, dielectric constant in free space, Planck constant, refractive index of active medium, and free mass of electron, respectively. $| P E x s , G r s σ | 2$ , is the transition matrix element [ ] and it is estimated approximately at 2m for InAs [ $S E x s _ p , G r s _ p$ is the photon density of the th mode emitted from Exs and Grs. The homogeneous broadening of the stimulated emission process is assumed to be Lorentzian such that $L E x s , G r s ( E G r s _ p − E G r s _ x )$ $L E x s , G r s ( E E x s _ p , G r s _ p − E E x s _ x , G r s _ x ) = Γ h o m / π ( E E x s _ p , G r s _ p − E E x s _ x , G r s _ x ) 2 + ( Γ h o m ) 2 ,$ $Γ h o m$ is the full-width half-maximum of homogeneous broadening. The gain saturation parameter, $ε E x s p , G r s p$ of the Exs and Grs, is given as: $ε E x s p , G r s p = h q 2 τ p | P E x s , G r s ∂ | 2 2 n r 2 ϵ 0 m 0 2 E E x s _ p , G r s _ p L E x s , G r s ( E E x s _ p , G r s _ p − E E x s 0 , G r s 0 ) ,$ $τ p$ is the photon lifetime and it is computed from the following equations: $1 τ p = v g [ α i n t + 1 2 L ln ( 1 R 1 R 2 ) ] ,$ where R1 and R2 are the reflectivities of the mirrors and is the length of the laser. $α i n t$ is the internal loss, while the mirror loss is calculated using to the equation: $ln ( 1 / R 1 R 2 ) / ( 2 L n r )$ $L E x s , G r s ( E E x s _ p , G r s _ p − E E x s 0 , G r s 0 )$ in Equation (20) is the Lorentzian function and given as: $L E x s , G r s ( E E x s _ p , G r s _ p − E E x s 0 , G r s 0 ) = Γ h o m / π ( E E x s _ p , G r s _ p − E E x s 0 , G r s 0 ) 2 + ( Γ h o m ) 2 ,$ The photon density in Exs and Grs is expressed as: $d S E x s p d t = Γ v g ∑ x g p x E x s S E x s p − S E x s p τ p + β ∑ x ( L E x s ( E E x s p − E E x s x ) N E x s τ E x s − r ) Δ E p ,$ $d S G r s p d t = Γ v g ∑ p g p x G r s S G r s p − S G r s p τ p + β ∑ x ( L G r s ( E G r s _ p − E G r s _ x ) N G r s τ G r s − r ) Δ E p ,$ is the spontaneous coupling factor. in Equation (13) is the photon density due to the applied EOGB to Exs in a round-trip time τ = 2 L/v . This term is equal to the number of photons per second per volume irradiating the Exs level in a single round-trip. $o p t = P i ( 2 L / v g ) q E E x s _ x V ,$ P[i] indicates the applied peak power of the Gaussian pulse to Exs. The parameters used in the simulations are given in Table 1 . The values of these parameters were obtained from [ In the algorithm, firstly the constant values and parameters of the laser are determined from Table 1 . After that, since the process repeats for every Q-Dot and every mode, the number of modes (i.e., ) and number of Q-Dots (i.e., ) are defined. As a result, the created simulation consists of three intertwined loops. Before starting all loops, the current is applied to the Wly of the laser. The outermost loop returns as many as the number of quantum dots we have determined (here, our results are relevant to three Q-Dots). In this loop, the energy differences are found with Equation (1). Inside the quantum dot loop, there is a second loop that is repeated according to the mode number. Within the mode loop, the energy differences between the modes are calculated by Equation (2). After these steps are completed, the homogeneous and inhomogeneous broadenings are calculated using Equations (8) and (19). By using the calculated homogeneous and inhomogeneous broadenings and the equations that are numbered as Equations (17), (18) and (20), the material gain and gain compression factor of Exs and Grs are calculated. After that, the third loop that provides the calculation of carrier and photon densities of Exs and Grs is started. In this loop, the rate equations defined as Equations (12)–(14), (23) and (24) are solved using fourth order Runge–Kutta Method. When this loop is completed, the photon and carrier density values of the relevant mode of the related quantum dot is reached. Then, the loops repeated by the number of modes and repeated by the number of quantum dots are finished, respectively. In our study, three Q-Dots and three modes were examined and, according to this algorithm, there are three modes for each Q-Dot. Since there are three Q-Dots, the center Q-Dot corresponds to the second Q-Dot. Similar to other Q-Dots, there are three modes for the second Q-Dot. Additionally, the results are written using the modes of the center Q-Dot (it is the second Q-Dot for our simulation). The sum of the modes can be calculated by summing the three modes taken from the second Q-Dot. 3. Discussion and Results A 1.55 μm InAs-InP (113)B Q-Dot laser was used in the simulation. The following equation was used to calculate the applied AC current , with frequency , and amplitude $I ( t ) = I r f 2 ( | cos ( 2 Π f t ) | − cos ( 2 Π f t ) ) ,$ Unlike previous studies, here, the nonlinear gain was included in multi-population rate equations. X and P are taken as X = P = 1 (i.e., it was assumed that there are three Q-Dot ensembles) and an Γ[hom] of 15 meV and Γ[ihom] of 45 meV has been used in the following results unless stated otherwise. For these values the gain compression factor, $ε E x s p , G r s p$ is calculated as 7.8 × 10^−16 cm^3 for Exs and Grs. To observe the radiation simultaneously from both Exs and Grs, the I [rf] was taken to be 40 mA in the simulations. Since Γ and Γ affect the threshold current ( ), the differential gain and the gain compression factor [ ], first without EOGB, and the effect of Γ and Γ on these mentioned parameters were investigated. Subsequently, an EOGB was applied to the Exs to observe how the optical beam illumination affects the gain-switching output pulses. was calculated as 30 mA for Exs and 2 mA for Grs for the linear-gain case ( $ε Exs , Grs$ = 0) (see Figure 2 a); 21 mA for Exs and 2 mA for Grs for the nonlinear-gain case ( $ε Exs , Grs$ ≠ 0) were obtained (see Figure 2 b). The total threshold current (Grs + Exs) for both cases was calculated as 2 mA. As seen in Figure 2 b, deviation from Figure 2 a due to $ε Exs , Grs$ is because of the direct relaxation from the Wly to Grs. changes between 30 and 80 meV at room temperature [ ], therefore, we changed it from 30 meV to 80 meV. In this case, Γ is taken as to be 15 meV at room temperature [ ]. Similarly, since the range of Γ is between 10 and 30 meV [ ], Γ is changed from 10 to 30 meV and Γ is taken to be 45 meV. For the center subgroup of the Q-Dot (second subgroup), when Γ is changed from 30 meV to 80 meV, as the I of Exs drops from 27 mA to 10 mA the I of Grs increases from 1 mA to 6 mA (see Figure 3 ). If Γ is greater than 70 meV, the threshold current increases as the photon density of Grs decreases and finally the threshold currents of Grs and Exs become the same at 11 mA. The effect of Γ on the differential gain is shown in Figure 4 . As seen in the figure, as Γ increases, the differential gain of Exs and Grs decreases. Similar results were also observed in [ The behavior of Γ and on differential gain is similar to that of Γ , providing similar differential gain characteristics as in Figure 4 when Γ is increased from 10 meV to 30 meV. For the center subgroup of the Q-Dot, as Γ is increased from 10 meV to 30 meV, of Exs decreases up to 22.5 meV (dropping from 26 mA to 10 mA) and after that point it slightly increases. of Grs increases from 1 mA to 6 mA (see Figure 5 ). When Γ is greater than 22.5 meV the photon density of Grs decreases, whereas the threshold current increases, yielding a threshold current of 14 mA, which is equal to that of the Exs. Figure 6 indicates the effect of Γ on the gain compression factors of Exs and Grs. As seen in the figure, the gain compression factor decreases with the increasing Γ As seen in the results, the differential gain of Exs is greater than that of the Grs because degeneracy of the Exs is twice that of the Grs. However, the gain compression factor is the same for Grs and Exs. Our results also showed that the output power decreases with the increasing Γ[hom] and Γ[ihom]. As mentioned before, the threshold current was obtained at 2 mA for the Grs and 24 mA for the Exs for the nonlinear-gain case. Therefore, to observe the gain-switched output pulses and also the simultaneous emission from both the Grs and the Exs, we applied I of 40 mA. Figure 7 indicates the gain-switched output pulses for an I of 40 mA. As shown in the figures, the Grs pulse width is longer (370 ps), while the Exs pulse width is narrow (43 ps). It can be also observed from the figure that the Exs and Grs together contribute to the output pulses since the applied current magnitude is greater than the threshold currents of both states. Therefore, the generated pulses are due to both Exs and Grs emission. The total (Exs + Grs) pulse width is 255 ps and the peak power is 28 mW. As seen in the results, the pulse width of the gain-switched output pulses are long. We also observed that increasing the injection current leads to both the peak power and pulse width increasing. The reason for the increase in the output pulse width with the current is that, although the photon density of the Grs increases with the current, the Grs photon density decreases slowly after reaching the maximum value, as seen in Figure 7 . However, the Exs photon density decreases rapidly compared to that of the Grs, yielding a shorter output pulse. It can be said that the long pulses in the InAs-InP (113)B lasers are emitted from the InP ground state. For InGaAs-GaAs lasers, it was shown that [ ] Grs emission is completely saturated in the light-current characteristics, while the Exs emission increases with the increasing current. As a result, if the injection current is increased, the Exs radiation becomes dominant over that of Grs, yielding shorter pulses owing to Exs radiation. Investigation on the InAs-GaAs monolithic Q-Dot lasers revealed that, when the applied injection current increases, the width of the gain-switched pulse decreases [ ]. However, in the case of InAs-InP (113)B lasers, the Grs emission does not go to saturation completely (see Figure 2 ) with the increasing injection current, instead both Grs and Exs emissions increase with the increasing current. Therefore, as mentioned before, the output pulse width of InAs-InP lasers increases with the increasing injection current as compared to that of InAs-GaAs lasers. Furthermore, our results showed that as Γ and Γ increase the of Exs decreases, whereas of Grs increases (see Figure 3 Figure 5 ). Therefore, according to the magnitude of the applied current, even with a smaller value of Γ , the contribution of Exs to output pulses is possible. In order to show this, 25 mA of current is applied and the gain-switched output pulses were obtained for Γ = 30 meV (Γ < Δ = E =48 meV) and Γ = 55 meV (Γ > Δ = 48 meV). As seen in Figure 8 Figure 9 , the output pulse with a full-width half-maximum (FWHM) of 386 ps and peak power of 26 mW for Γ = 30 meV is generated from Grs emission only. However, since of Exs decreased for Γ = 55 meV, both Grs and Exs contribute the output pulse providing an FWHM of 233 ps and peak power of 10 mW. If we apply a current greater than the peak current of 25 mA, for example 60 mA for Γ = 30 meV, both Grs and Exs contribute to the lasing process simultaneously, as shown in Figure 10 producing an FWHM of 478 ps and peak power of 53 mW. Briefly, we can say that the contribution of Exs to gain-switched output pulses depend on not only the value of Γ , which is smaller or greater than the energy difference between Exs and Grs, but also on the magnitude of the applied current. In addition, it can be also observed from the results that the width of pulses are long due to dominant effect of the Grs emission as mentioned before. Wang et al. [ ] showed that if Γ is smaller than the energy difference between Exs and Grs (Δ = E ), lasing occurs only due to Grs if Γ is greater than the Δ ; both Grs and Exs contribute to the lasing process. However, as seen from our results, Exs lasing depends on the magnitude of current as well as on the value of Γ According to obtained results, we can say that it is impossible to generate gain-switched short pulses with a high peak power as long as the Grs emission is dominant over the Exs emission for InAs-InP QD lasers. Since the threshold current of Exs is much higher than that of Grs and increasing the injection current makes both Grs and Exs emissions increase, Exs emissions cannot be dominant over Grs emissions. Therefore, in order to obtain gain-switched short pulses with high peak power at low currents, the Exs emission must be sustained while Grs emission must be suppressed. For this reason, an external Gaussian pulse beam to the Exs was applied. When an optical beam is applied to the Exs, depending on the peak value of the applied optical beam, the threshold currents of Grs and Exs can become zero (see Figure 11 ). Furthermore, the photon density of the Exs can exceed that of the Grs up to a certain current range. This yields gain-switched short pulses exhibiting high power at low currents. Figure 11 shows light vs. dc current characteristics obtained by applying an EOGB with a peak power of 10 mW and a width of 10 ps. In order to see the zero-threshold current for both the states, the dc current was applied up to 50 mA. As seen in the figure, the power of Exs is greater than that of the Grs up to some current value with the application of optical beam and the threshold currents become zero for both states as explained before. Figure 12 indicates the gain-switched output pulses under the optical beam having a peak power of 10 mW and a width of 10 ps for I of 12 mA. As seen in the figure, Exs emission is dominant over Grs emission, which means the output pulse is generated due to Exs emission. Therefore, the width of the output pulse is narrow (26 ps) and the peak power is high (82 mW) even though the applied current is low. Additionally, the peak power of EOGB must be increased to further increase the peak power of the output pulse. Figure 13 shows output pulses for an optical beam peak power of 20 mW for I of 12 mA. As seen in the figure, while the peak power of the output increases, the width of the output pulse slightly increases and provides a value of 27 ps. Furthermore, according to the applied current, we can adjust the magnitude of the optical beam to obtain short pulses. It was also found that changes in the laser parameters do not affect the output pulse width and peak power significantly in the presence of the optical beam. However, without EOGB, the output pulses are strongly affected by the change in the laser parameters. Similar results were also obtained for the InAs-InP (113)B quantum dot laser based on single mode rate equations [ Regarding the zero-gain compression factor, our results demonstrated that the behavior of gain-switching characteristics with and without EOGB are similar for liner-gain and nonlinear-gain cases except that higher peak power and narrower output pulses are obtained for the linear-gain case. As a conclusion, when an EOGB is applied to the Exs, the photon emission of Exs becomes dominant over the Grs, providing shorter output pulses with a high peak power. The Exs emission can be tuned for the C-band optical communication window with proper band energy engineering and growth optimization, such as the double cap procedure [ 4. Conclusions In this study, for the first time, the gain-switching properties have been theoretically investigated in detail in the absence and presence of optical beam illumination for direct the relaxation model of InAs-InP (113)B Q-dot lasers. The model is based on multi-population rate equations involving nonlinear gain. The first effect of the homogeneous and the inhomogeneous broadenings on the differential gain, the gain compression factor, and the threshold current was examined in the absence of the optical beam. Subsequently, the gain-switched output pulses were studied in the absence and presence of optical beam illumination. Our results indicated the following: The differential gain and the gain compression factor decrease with the increasing homogeneous and inhomogeneous broadenings. However, while the threshold current of the ground state increases, that of the excited state decreases as the broadenings are increased. While gain-switched pulses are produced only due to ground state emission at small currents, the Exs and Grs emissions contribute to the output pulses at greater currents values (greater than the threshold currents of both excited state and ground state). Since the photon density of the ground state decreases gradually after reaching its maximum value, the output pulses originating from ground state emission have long pulse widths. On the other hand, the photon density of the excited state decreases rapidly after reaching its maximum value, yielding a narrower pulse width. Therefore, when an optical Gaussian pulse beam is applied to the quantum dot laser, gain-switching shorter pulses with high peak power are obtained since excite state emissions dominate ground state emissions. The contribution of excited state to gain-switched output pulses depend on the magnitude of the applied current as well as on the value of the inhomogeneous broadening. In the absence of the optical beam, the laser output is strongly affected by the change in the laser parameters, whereas in the presence of the optical beam, this effect is negligible. The behavior of gain-switching characteristics with and without a Gaussian pulse beam are similar for liner-gain and nonlinear-gain cases except that higher peak power and narrower output pulses are obtained for the linear-gain case. As a result, short pulses with a width of around 26 ps with a high peak power can be generated at low currents by applying an external optical beam to the Exs. These results showed that InAs-InP (113)B quantum dot lasers are a candidate source for many applications as well as optical communication systems. A more sophisticated model such as the investigation of the electron-hole model for multi-population rate equations would be the subject of future work. Author Contributions Methodology, N.D.; software, H.S.D.T. and E.C.; investigation, N.D., H.S.D.T. and E.C.; writing—original draft preparation, N.D.; supervision, N.D.; project administration, N.D. All authors have read and agreed to the published version of the manuscript. This research was funded by The Scientific and Technological Research Council of Turkey (TUBITAK), grant number 119F099. Data Availability Statement Not applicable. Conflicts of Interest The authors declare no conflict of interest. 1. Shimizu, M.; Suzuki, Y.; Watanabe, M. Characteristics of Cavity Round-Trip Time Pulses in Short-Cavity Q-Switched AlGaAs Multiple-Quantum-Well Semiconductor Lasers. Jpn. J. Appl. Phys. 1998, 37, 1040. [Google Scholar] [CrossRef] 2. Grillot, F.; Veselinov, K.; Gioannini, M.; Montrosset, I.; Even, J.; Piron, R.; Homeyer, E.; Louaiche, S. Spectral Analysis of 1.55-m InAs–InP(113)B Quantum-Dot Lasers Based on a Multipopulation Rate Equations Mode. IEEE J. Quantum Electron. 2009, 45, 872. [Google Scholar] [CrossRef] 3. Caroff, P.; Paranthoen, C.; Platz, C.; Dehaese, O.; Folliot, H.; Bertru, N.; Labbe, C.; Piron, R.; Homeyer, E.; Le Corre, A.; et al. High gain and low-threshold InAs QD lasers on InP. J. Appl. Phys. 2005, 87, 243107. [Google Scholar] 4. Saito, H.; Nishi, K.; Kamei, A.; Sugou, S. Low chirp observed in directly modulated quantum dot lasers. IEEE Photon. Technol. Lett. 2000, 12, 1298. [Google Scholar] [CrossRef] 5. Reithmaier, J.P.; Forchel, A. Recent advances in semiconductor quantum-dot lasers. C. R. Phys. 2003, 4, 611. [Google Scholar] [CrossRef] 6. Huang, H.; Duan, J.; Jung, D.; Liu, A.Y.; Zhang, Z.; Norman, J.; Bowers, J.E.; Frédéric, G. Analysis of the optical feedback dynamics in InAs/GaAs quantum dot lasers directly grown on silicon. J. Opt. Soc. Am. B 2018, 35, 2780. [Google Scholar] [CrossRef] 7. Rafailov, E.U.; Cataluna, M.A.; Sibbett, W. Mode-locked quantum-dot lasers. Nat. Photonics 2007, 1, 395–401. [Google Scholar] [CrossRef] 8. Sritirawisarn, N.; Otten FW, M.; van Eijkemans, T.J.; Nötzelet, R. Surface morphology induced InAs quantum dot or dash formation on InGaAs/InP(100). J. Cryst. Growth 2007, 305, 63. [Google Scholar] [CrossRef] 9. Heck, S.C.; Osborne, S.; Healy, S.B.; O’Reilly, E.P.; Lealarge, F.; Pointgt, F.; Le Gouezigou, O.; Accard, A. Experimental and theoretical study of InAs/InGaAs/InP quantum dash laser. IEEE J. Quantum Electron. 2009, 45, 1508. [Google Scholar] [CrossRef] 10. Sugawara, M.; Mukai, K.; Nakata, Y.; Ishikawa, H. Effect of homogeneous broadening of optical gain on lasing spectra in self-assemled In[x]Ga[1–x]As/GaAs quantum dot lasers. Phys. Rev. 2000, 61, 7595. [Google Scholar] [CrossRef] 11. Grillot, F.; Veselinov, K.; Gioannini, M.; Piron, R.; Homeyer, E.; Even, J.; Loualiche, S.; Montrosset, I. Theoretical analysis of 1.55-µm InAs/InP (113)B quantum dot lasers based on a multi-population rate equation model. In Proceedings of the Physics and Simulation of Optoelectronic Devices XVII, San Jose, CA, USA, 24 February 2009; Volume 7211. [Google Scholar] 12. Wang, C.; Gionnini, M.; Montrosset, I.; Even, J.; Grillot, F. Influence of inhomogeneous broadening on the dynamics of quantum dot lasers. In Proceedings of the SPIE OPTO, Physics and Simulation of Optoelectronic Devices XXIII, San Francisco, CA, USA, 28 April 2015; Volume 9357. [Google Scholar] 13. Aleem, M.N.A.; Huessein, K.F.A.; Ammar, A.A. Semiconductor quantum dot lasers as pulse sources for high bit rate data transmission. PIER 2013, 28, 185. [Google Scholar] [CrossRef] 14. Gioannini, M.; Rossetti, M. Time-domain traveling wave model of quamtum dot DFB lasers. IEEE J. Sel. Top. Quantum Electron. 2011, 17, 1318. [Google Scholar] [CrossRef] 15. Gioannini, M.; Bardella, P.; Montrosset, I. Time-domain traveling-wave analysis of multimode dynamics of quantum dot Fabry-Perot lasers. IEEE J. Sel. Top. Quantum Electron. 2015, 21, 1900811. [ Google Scholar] [CrossRef] 16. Veselinov, K.; Grillot, F.; Miska, P.; Homeyer, E.; Caroff, P.; Platz, C.; Even, J.; Dehaese, O.; Loualiche, S.; Marie, X.; et al. Carrier dynamics and saturation effects in (311)B InAs-InP quantum dot lasers. Opt. Quantum Electron. 2006, 38, 369. [Google Scholar] [CrossRef] [Green Version] 17. Xu, D.V.; Yoon, S.F.; Tong, C.Z. Self-consistent analysis of carrier confinement and output power in 1.3-µm InAs-GaAs quantum dot VCSELs. IEEE J. Quantum Electron. 2008, 44, 879. [Google Scholar] 18. Dogru, N.; Adams, M.J. Intensity noise of actively mode-locked quantum dot external cavity laser. J. Light. Technol. 2014, 32, 3215. [Google Scholar] [CrossRef] 19. Dogru, N.; Adams, M.J. Numerical simulation of a mode-locked quantum dot external cavity laser. IET Optoelectron. 2014, 8, 44. [Google Scholar] [CrossRef] 20. Veselinov, K.; Grillot, F.; Cornet, C.; Even, J.; Bekaiarski, A.; Gioannini, M.; Loualiche, S. Analysis of the double laser emission occurring in 1.55-µm InAs-InP(113)B quantum-dot lasers. IEEE J. Quantum Electron. 2007, 43, 810. [Google Scholar] [CrossRef] [Green Version] 21. Avrutin, E.; Ryvkin, B.; Kostamovaara, J.; Kuksenkov, D. Strongly asymmetric waveguide laser diodes for high brightness picosecond optical pulses generation by gain switching at GHz repetition rates. Semicond. Sci. Technol. 2015, 30, 055006. [Google Scholar] [CrossRef] 22. Avrutin, E.A.; Dogru, N.; Ryvkin, B.; Kostamovaara, J.T. Spectral control of asymmetric-waveguide large signal modulated diode lasers for non-linear applications. IET Optoelectron. 2016, 10, 57. [Google Scholar] [CrossRef] 23. Bhattacharya, P.; Klotzkin, D.; Qasaimeh, O.; Zhou, W.; Krishna, S.; Zhu, D. High-speed modulation and switching characteristics of In(Ga)As-Al(Ga)As self-organized quantum-dot lasers. IEEE J. Sel. Top. Quantum Electron. 2000, 6, 426. [Google Scholar] [CrossRef] 24. Cornet, C.; Labbe, C.; Folliot, H.; Bertru, N.; Dehaese, O.; Even, J.; Le Corre, A.; Paranthoen, C.; Platz, C.; Loualiche, S. Quantitative investigation of optical absorption in InAs/InP (311)B quantum dots emitting at 1.55 µm wavelength. Appl. Phys. Lett. 2004, 85, 5685. [Google Scholar] [CrossRef] 25. Hantschmann, C.; Vasil’ev, P.P.; Chen, S.; Liao, M.; Seeds, A.J.; Liu, H.; Penty, R.V.; White, I.H. Gain switching of monolithic 1.3 µm InAs/GaAs quantum dot lasers on silicon. J. Light. Technol. 2018, 36, 3837. [Google Scholar] [CrossRef] 26. Dogru, N.; Duranoglu Tunc, H.S.; Al-Dabbagh, A.M. Gain-switched short pulse generation from InAs-InP(113)B quantum dot laser excited state. Opt. Laser Technol. 2022, 148, 107709. [Google Scholar] 27. Paranthoen, C.; Bertru, N.; Dehaese, O.; Loualiche, S.; Lmabert, B. Height dispersion control of InAs/InP(113)B quantum dots emitting at 1.55 µm. Appl. Phys. Lett. 2001, 78, 1751. [Google Scholar ] [CrossRef] 28. Koenraad, P.M.; Bertru, N.; Bimberg, D.; Loualiche, S. Electronic and optical properties of InAs/InP quantumdots on InP(100) and InP(311)B substrates: Theory and experiment. Phys. Rev. B 2006, 74 , 035312. [Google Scholar] Figure 2. Output power vs. dc current without applying EOGB to the Exs, (a) $ε Exs , Grs$ = 0 (b) $ε Exs , Grs$ ≠ 0. Cavity length, L 0.245 cm Cavity width, w 12 µm Confinement factor, Γ 0.025 Quantum dot density, No 6 × 10^16 cm^−3 Refractive index, nr 3.27 Cavity internal loss, αint 6 cm^−1 Mirror reflectivity, R1, R2 0.95, 0.05 Spontaneous emission ofWly, τwr 500ps Spontaneous emission of Exs, τer 500ps Spontaneous emission of Grs, τp 1.2ns Photon lifetime, τr 8.92 ps Spontaneous coupling factor, β 1 × 10^−4 Emission energy of Wly, Ewly 1.05 eV Emission energy of Exs, Eexs 0.840 eV Emission energy of GS, Egrs 0.792 eV Phonon relaxation of Wly, Awly 1.35 × 10^10 s^−1 Auger coefficient of Wly, Cwly 5 × 10^−9 cm^3s^−1 Phonon relaxation of Wly, Aexs 1.5 × 10^10 s^−1 Auger coefficient of Exs, Cexs 9 × 10^−8 cm^3s^−1 Degeneracy of Grs, Exs, Wly, µgrs, exs, wly 2,4,10 Operating frequency, f 1 GHz Wavelength, λ 1.55 µm Homogeneous broadening, Γhom 15 meV Inhomogeneous broadening, Γihom 45 meV Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https: Share and Cite MDPI and ACS Style Tunc, H.S.D.; Dogru, N.; Cengiz, E. Gain-Switched Short Pulse Generation from 1.55 µm InAs/InP/(113)B Quantum Dot Laser Modeled Using Multi-Population Rate Equations. Mathematics 2022, 10, 4316. AMA Style Tunc HSD, Dogru N, Cengiz E. Gain-Switched Short Pulse Generation from 1.55 µm InAs/InP/(113)B Quantum Dot Laser Modeled Using Multi-Population Rate Equations. Mathematics. 2022; 10(22):4316. https:/ Chicago/Turabian Style Tunc, Hilal S. Duranoglu, Nuran Dogru, and Erkan Cengiz. 2022. "Gain-Switched Short Pulse Generation from 1.55 µm InAs/InP/(113)B Quantum Dot Laser Modeled Using Multi-Population Rate Equations" Mathematics 10, no. 22: 4316. https://doi.org/10.3390/math10224316 Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details Article Metrics
{"url":"https://www.mdpi.com/2227-7390/10/22/4316","timestamp":"2024-11-02T02:41:06Z","content_type":"text/html","content_length":"499821","record_id":"<urn:uuid:0b5338c1-ef1d-4fde-bf5b-4f7110b80afb>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00708.warc.gz"}
What Is Volume ? The volume of a solid is the amount of space inside the solid. Consider the cylinder below: If we were to fill the cylinder with water. - ppt video online download 1 What Is Volume ? The volume of a solid is the amount of space inside the solid. Consider the cylinder below: If we were to fill the cylinder with water the volume would be the amount of water the cylinder could hold: 2 Measuring Volume. Volume is measured in cubic centimetres (also called centimetre cubed). Here is a cubic centimetre It is a cube which measures 1cm in all directions. 1cm We will now see how to calculate the volume of various shapes. 3 Volumes Of Cuboids. Look at the cuboid below: 4cm 3cm 10cmWe must first calculate the area of the base of the cuboid: The base is a rectangle measuring 10cm by 3cm: 3cm 10cm 4 3cm 10cm Area of a rectangle = length x breadth Area = 10 x 3 Area = 30cm2 We now know we can place 30 centimetre squares on the base of the cuboid. But we can also place 30 cubic centimetres on the base: 10cm 3cm 4cm 5 10cm 3cm 4cm We have now got to find how many layers of 1cm cubes we can place in the cuboid: We can fit in 4 layers. Volume = 30 x 4 Volume = 120cm3 That means that we can place 120 of our cubes measuring a centimetre in all directions inside our cuboid. 6 10cm 3cm 4cm We have found that the volume of the cuboid is given by: Volume = 10 x 3 x 4 = 120cm3 This gives us our formula for the volume of a cuboid: Volume = Length x Breadth x Height V=LBH for 7 What Goes In The Box ? Calculate the volumes of the cuboids below: (1)14cm 5 cm 7cm (2) 3.4cm 490cm3 39.3cm3 (3) 8.9 m 2.7m 3.2m 76.9 m3 8 The Cross Sectional Area.When we calculated the volume of the cuboid : 10cm 3cm 4cm We found the area of the base : This is the Cross Sectional Area. The Cross section is the shape that is repeated throughout the volume. We then calculated how many layers of cross section made up the volume. This gives us a formula for calculating other volumes: Volume = Cross Sectional Area x Length. 9 For the solids below identify the cross sectional area required for calculating the volume:(2) (1) Right Angled Triangle. Circle (4) (3) A2 A1 Rectangle & Semi Circle. Pentagon 10 The Volume Of A Cylinder.Consider the cylinder below: It has a height of 6cm . 4cm 6cm What is the size of the radius ? 2cm Volume = cross section x height What shape is the cross section? Circle Calculate the area of the circle: A = r 2 A = 3.14 x 2 x 2 A = cm2 The formula for the volume of a cylinder is: V = r 2 h r = radius h = height. Calculate the volume: V = r 2 x h V = x 6 V = 11 The Volume Of A Triangular Prism.Consider the triangular prism below: 5cm 8cm Volume = Cross Section x Height What shape is the cross section ? Triangle. Calculate the area of the triangle: A = ½ x base x height A = 0.5 x 5 x 5 A = 12.5cm2 Calculate the volume: Volume = Cross Section x Length The formula for the volume of a triangular prism is : V = ½ b h l B= base h = height l = length V = 12.5 x 8 V = 100 cm3 12 What Goes In The Box ? 2 Calculate the volume of the shapes below: (2)(1) 16cm 14cm (3) 6cm 12cm 8m 2813.4cm3 30m3 288cm3 13 Summary Of Volume Formula.h V = r 2 h l b h V = l b h b l h V = ½ b h l
{"url":"https://slideplayer.com/slide/3620995/","timestamp":"2024-11-14T21:08:14Z","content_type":"text/html","content_length":"171216","record_id":"<urn:uuid:bfcadc95-7e53-4f9c-af81-3c65f319c6e3>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00835.warc.gz"}
Synthesis and Enzymatic Studies of Selenium Derivatized Nucleosides, Nucleotides and Nucleic Acids Physics and Astronomy Dissertations Department of Physics and Astronomy Photoionization of the Be Isoelectronic Sequence: Relativistic and Nonrelativistic R-Matrix Wei-Chun Chu Georgia State University Follow this and additional works at:https://scholarworks.gsu.edu/phy_astr_diss Part of theAstrophysics and Astronomy Commons, and thePhysics Commons This Dissertation is brought to you for free and open access by the Department of Physics and Astronomy at ScholarWorks @ Georgia State University. It has been accepted for inclusion in Physics and Astronomy Dissertations by an authorized administrator of ScholarWorks @ Georgia State University. For more information, please [email protected]. Recommended Citation Chu, Wei-Chun, "Photoionization of the Be Isoelectronic Sequence: Relativistic and Nonrelativistic R-Matrix Calculations." Dissertation, Georgia State University, 2009. Under the Direction of Steven Manson The photoionization of the beryllium-like isoelectronic series has been studied. The bound state wave functions of the target ions were built with CIV3 program. The relativistic Breit-Pauli R-matrix method was used to calculate the cross sections in the photon energy range between the ionization threshold and 1s24f7/2 threshold for each ion. For the total cross sections of Be, B+, C+2, N+3, and O+4, our results match experiment well. The comparison between the present work and other theoretical works are also discussed. We show the comparison with our LS results as it indicates the importance of relativistic effects on different ions. In the analysis, the resonances converging to 1s22lj and 1s23lj were identified and characterized with quantum defects, energies and widths using the eigenphase sum methodology. We summarize the general appearance of resonances along the resonance series and along the isoelectronic sequence. Partial cross sections are also reported systematically along the sequence. All calculations were performed on the NERSC system. A Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in the College of Arts and Sciences Georgia State University Copyright by Wei-Chun Chu Committee Chair: Steven Manson Committee: Vadym Apalkov William Nelson Brian Thoms Paul Wiita Electronic Version Approved: LIST OF TABLES viii LIST OF FIGURES x 1 INTRODUCTION 1 1.1 Photoionization 1 1.2 Developments in Experiment 1 1.3 Developments in Calculations 3 1.4 Focus of Present Work 5 2 THEORY 7 2.1 Photoionization Theory 7 2.1.1 Basic process 7 2.1.2 Cross section 8 2.1.3 Resonances 10 2.2 Nonrelativistic R-Matrix Theory 12 2.2.1 Equation of motion 12 2.2.2 N-electron states 13 2.2.3 The ejected electron 15 2.2.4 Internal region 17 2.2.5 Continuum orbitals 18 2.2.6 R-matrix 20 2.2.8 External region 23 2.2.9 Open channel solutions 25 2.3 Radiative Process 27 2.3.1 Closed channel solutions 27 2.3.2 Dipole matrices 28 2.4 Breit-Pauli R-matrix Theory 31 3 METHOD OF CALCULATIONS 35 3.1 CIV3 Program 35 3.1.1 Introduction 35 3.1.2 The computational setup 37 3.2 RMATRX Program 39 3.2.1 Module STG1 39 3.2.2 Module STG2 40 3.2.3 Module STGH 41 3.2.4 Module STG4 41 3.3 QB Program 42 4 RESULTS 46 4.1 Energy Levels 46 4.2 Ground and Metastable State Cross Sections 47 4.3 Cross Sections Compared with Experiments 49 4.3.1 Be 49 4.3.2 B+ 50 4.3.4 N+3 51 4.3.5 O+4 52 4.4 Cross Sections Compared with Other Calculations 53 5 ANALYSIS 79 5.1 General Appearance of Cross Sections 79 5.2 Relativistic Effects 81 5.3 Resonance 84 5.3.1 Identification of resonances 84 5.3.2 Quantum defects of resonances 87 5.3.3 Perturbation and overlapping of resonances 88 5.4 Partial Cross Sections 91 6 CONCLUSIONS 135 REFERENCES 138 Table 4.1 Energy levels of the target ions. 58 Table 4.2 [Binding energies for the ] e 0 1 S state. 59 Table 4.3 [Binding energies for the ] o 0 3 P state. 59 Figure 2.1 The photoionization process 33 Figure 2.2 The basic idea of R-matrix theory 33 Figure 2.3 The Fano profile with different q values 34 Figure 3.1 The flowchart of the calculations 45 Figure 4.1 The ground state photoionization cross sections 60 Figure 4.2 The metastable state photoionization cross sections 61 Figure 4.3 The photoionization cross section of Be compared with experiment 62 Figure 4.4 The photoionization cross section of Be compared with experiment 63 Figure 4.5 The photoionization cross section of B+ compared with experiment 64 Figure 4.6 The photoionization cross section of C+2 compared with experiment 65 Figure 4.7 The photoionization cross section of N+3 compared with experiment 66 Figure 4.8 The photoionization cross section of O+4 compared with experiment 67 Figure 4.9 The total photoionization cross section of Be compared with OP 68 Figure 4.10 The total photoionization cross section of Ne+6 compared with OP 69 Figure 4.11 The total photoionization cross section of Ar+14 compared with OP 70 Figure 4.12 The total photoionization cross section of Fe+22 compared with OP 71 Figure 4.13 The photoionization cross section of Be compared with Kim’s 72 Figure 4.14 The photoionization cross section of B+ compared with Kim’s 73 Figure 4.15 The photoionization cross section of C+2 compared with Kim’s Figure 4.16 The photoionization cross section of C+2 compared with Nahar and Pradhan’s calculation 75 Figure 4.17 The photoionization cross section of N+3 compared with Nahar’s 76 Figure 4.18 The photoionization cross section of O+4 compared with Nahar’s 77 Figure 4.19 The photoionization cross section of C+2 compared with Pradhan et al’s 78 Figure 5.1 The splitting between thresholds with different l 94 Figure 5.2 Photoionization cross sections of Ne+6 in BP and LS calculations 95 Figure 5.3 Photoionization cross sections of S+12 in BP and LS calculations 96 Figure 5.4 Photoionization cross sections of Fe+22 in BP and LS calculations 97 Figure 5.5 Photoionization cross sections of ground state Fe+22 in BP and LS calculations near 3s3p and 3p3d resonances 98 Figure 5.6 Photoionization cross sections of metastable state Fe+22 in BP and LS calculations near 3s3s and 3s3d resonances 98 Figure 5.7 The splitting between thresholds with different j 99 Figure 5.8 The widths and the positions of the first few photoionization resonances for Fe+22. 100 Figure 5.9 The evolution with Z of the photoionization cross sections near ν =3.0 with respect to T3s Figure 5.10 The asymptotic quantum defects of the 2pjnlj′ resonances 130 Figure 5.11 The photoionization cross sections near the 4s4p resonance in B+, C+2, and N+3 131 Figure 5.12 Partial cross sections for ground state Fe+22 132 Figure 5.13 [2][s] just above the 4f 7/2 threshold 133 The study of atomic processes in ionized matter is of great importance since ions compose not only the biggest part of the universe, but also occur in many laboratory settings. Since the early years of quantum mechanics, theory and experiment in atomic collisions and spectroscopy form a very critical part of the examination of the properties of matter. Being the core process of many physical and chemical reactions, photoionization and its reverse process, electron-ion recombination, occur in numerous astronomical objects, and other physical systems. Owing to improvement in experimental techniques and computational power, a lot of experimental and theoretical data have been generated in the past ten years. For experiment, the merged-beam method broadens the choice of target ions and increases the accuracy of absolute cross section measurements. It is used for absolute cross section measurements in most major laboratories in the world today. Also, in synchrotron development, third-generation light sources have improved the measurements by raising the light intensity and the photon energy range since the early 90s. On the calculational side, different approaches are continuously being tested with the rapid growth of computational speed in the modern day computers. With the parallel development of experiments and calculations, a comprehensive comparison and analysis can be carried out. More details of these experiments and calculations will be reviewed in the next sections. 1.2.Developments in Experiment more than 50 synchrotron facilities over the world are in operation for research in chemistry, physics, materials science, biology and other fields. These continuous and polarized light sources have energies ranging from the IR to the X-ray region. Synchrotrons use magnetic fields to accelerate the charges to generate radiation. After World War II, the first generation machines were used in fundamental particle physics studies, and the theory and design of synchrotron was well understood. Third-generation synchrotron light sources came into operation nearly twenty years ago. They were designed to provide reliable light sources with a wide range of photon energy, high intensity, and continuous operation. The most important features of modern synchrotrons are the storage ring and the undulator. A storage ring is a closed track for the electron beam to run as many times as possible in order to greatly reduce the power required. With a high quality vacuum chamber in the storage ring, the lifetime of the beam is from 5 to 100 hours. An undulator is a magnetic device that generates sinusoidal magnetic field along the trajectory of the electron beam. The spatial period of the field is determined relativistically to relate the electron speed to the radiation frequency. The third generation machines have significantly increased the brilliance (number of photons emitted per unit time, per unit photon energy, per unit solid angle, per unit source size) from the previous ones. Among these modern facilities, the Advanced Light Source (ALS) in USA, ASTRID in Denmark, the Photon Factory and SPring-8 in Japan, and SuperACO in France have been the sites of large scale cross section measurements. cross section measurements. The application on photoionization, which was first used by Lyon et al [4], aligns the target ion beam and the photon beam for some distance as the interaction region. This effectively compensates for the usually low density of ions and the limited light intensity which are common in absolute cross section measurements. Thus, this method can be applied to many more charge-states than other methods in almost all elements, and the absolute cross section measurement can be made since the ion density can be determined. West discussed the impact and improvement that this technique has brought to absolute cross section measurements in a review paper [5]. Lyon's measurement, described above, along with his following works at Daresbury in 1986-1987 [6][7], were the first applications of this method in ions. Later Koizumi et al [8] at Photon Factory in Japan and ASTRID storage ring at University of Aarhus in Denmark also did the measurements using merged-beam method and it is generally used now. 1.3.Developments in Calculations To calculate atomic processes, an accurate description of the wave functions of the system is required. For the discrete atomic wave functions, calculation methods such as the Hartree-Fock (HF) method [9][10] have been developed, and many computational packages are available. For atomic processes involving continuum states, like electron-atom collision or ionization, there are various methods with different advantages and disadvantages. phase approximation (RRPA) was developed by Johnson and Lin [13], with which the photoionization parameters in Ar, Kr and Xe were presented by Huang et al [14]. Multiconfigurational relativistic random phase approximation (MCRRPA) by Huang and Johnson [15] was carried out later along this track. Similar to RPA, many body perturbation theory (MBPT) has the same radiation term in the Hamiltonian, but while the radiation contains only the first order, the specified electron correlation expands to all orders. Based on the fermionic many-body theory by Goldstone [16], the formalism of MBPT was done by Kelly [17][18][19], and various examples are shown in his review paper [20]. Double-photoionization was enabled based on this method in the work of Chang et al [21]. 1.4.Focus of Present Work The present study focuses on the calculation of the photoionization cross sections of Be-like isoelectronic ions and the analysis of the data obtained. This isoelectronic sequence is chosen due to the simple ionic form and the importance in astrophysics. The theoretical energy levels of target ions and initial ions are compared with the NIST values to ensure the quality of the wave functions used. To estimate the accuracy of our total cross sections, our theoretical results are compared with the available experimental results which exist for Be, B+, C+2, N+3, and O+4 ions. The ground state and metastable state partial cross sections are separately compared with other theoretical results and are analyzed. 2. THEORY 2.1.Photoionization Theory In this section we present the theoretical approach to the most measured quantity in photoionization, the cross section, by considering the atomic system quantum mechanically in the presence of the electromagnetic radiation. This is a common starting point for all numerical calculation methods. We also emphasize the role of resonances as important in the photoionization phenomenon. Some detailed mathematical derivations are relegated to Appendix A. We use Gaussian units in this section. The general theory of photoionization cross section is reviewed and discussed in details by Burke [36] and by Amusia [37]. Time-dependent perturbation theory in quantum mechanics is well described by Merzbacher [38] and by Sakurai [39]. 2.1.1. Basic process The single photoionization process, which involves one incident photon and one ejected electron, is described by − + [+] +h j e i A In some photon energy ranges, the photoionization can proceed either directly to the continuum ionized state, or go through an intermediate excited state, a resonance, which is described by − + [+] → → +h *[k] [j] e i A A (2.2) where A* stands for the intermediate excited state. Figure 2.1 shows this process schematically. The delay process from the excited state, A* , to the ionization is called autoionization. The interference of these two routes is characterized by a resonance profile in the photoionization cross section, which will be discussed in Subsection 2.1.2. Cross section The general definition of total cross section, , for scattering is given by particles incident of Flux scatterer per unit time per events of . (2.3) In the single photoionization of an atom or molecule, which is our current focus, it is equivalent to write field radiation of flux Energy unit time per absorbed Energy . (2.4) In the theoretical approach, the initial system of an atom or ion with N+1 electrons is in a specific eigenstate of the (N+1)-electron Hamiltonian, and the radiation field is described by a plane wave with frequency ω. The energy flux cU (U is energy density) of a plane electromagnetic wave is + = 0 B E c where E and 0 B are amplitudes of electric field E0 and magnetic field B , respectively. In terms of the vector potential A r , with A B t A c E r r r r r × ∇ = ∂ ∂ − = 1 , (2.6) the energy flux is given by 2 c A cU = (2.7) where the vector potential is ( ) ikr i t e A t r A = ⋅ −ω r r r , [0] , (2.8) where ˆ is the unit vector along the direction of the vector potential. Now we assume that the initial state of the system is a discrete state i with total energy E , and after the absorption of photon energy hω, the final state j is continuum with energy E . The normalization of i and j are ii′ = ′ and j j ′ = , so the dimension of j has an extra factor of 1Energy to the dimension of i . With the transition probability rate W[i][→][j], Eq. (2.4) gives σ = , which in turn gives (see Appendix A for details) 2 2 2 2 2 ˆ 4 i D j c m e V V r h ⋅ = ε ω π σ (2.9) in velocity form and 2 2 2 ˆ 4 i D j c e L L r ⋅ = π ω ε in length form, where m is electron mass. The dipole velocity operator DV and the dipole length operator D[L] are defined in Eq. (A.15) and Eq. (A.18) respectively in Appendix A. In atomic units, the Bohr radius is a[0] =h2 me2, the fine structure constant is 2 h , and the energy is measured in the units of e2 a[0]; Eq. (2.9) and Eq. (2.10) are reduced to (remember V D , D[L] r , and j need to be in atomic units) ˆ 4 i D j a V V r ⋅ (2.11) and ˆ 4 a j D[L]i r ⋅ = (2.12) where is the photon energy in atomic units. For the exact wave functions, are identical [40]. However, for many electron systems, exact wave functions are not possible. Thus, the initial state and the final state of the system are described by expansions of a basis set. In such a case, the comparison of can indicate the quality of the approximate wave functions employed. 2.1.3. Resonances The analysis of resonances developed by Fano [41] shows that the shape of a resonance in the cross section can be expressed by 1 ε ε σ + + + = A q (2.13) where Γ − = E Er ε , (2.14) Er is the resonance energy and Γ is the resonance width. Figure 2.3 shows the function 1 ε ε + q with q=0, q=1, and q=2. When applying this picture to ( ) , the parameter Er defines the center of the peak, and Γ defines the scaling in E. The Fano profile is usually used in characterizing the resonance by fitting the function Eq. (2.13) to the data to get the parameters. It is also shown in Ref. [41] that under the condition where only one discrete (resonance) state and one continuum state are presented, with the normalizations ' ' E E E H V H E H E E E E n nn n n n − = = = where H is the total Hamiltonian, Γ is determined by Γ=πV[E] 2 calculated at the resonance energy Er. Thus, Γ is considered a measure of the strength of the interaction between the discrete and the continuum at the resonance energy. If the system is prepared in the combination of the discrete (resonance) state ϕn and the continuum state , the mean lifetime for autoionization 2.2.Nonrelativistic R-Matrix Theory This section is based on the theory described by Burke et al [42], Burke and Taylor [43], Scott and Burke [44], and Berrington et al [45]. We also follow the notations used in these papers here. 2.2.1. Equation of motion Photoionization of an isolated (N+1)-electron atomic system can be written generally as − + + + [+] [→] [+] e h (n ) n 1 A ν (2.16) where the initial atomic system An+ (n=0 for neutral atom) has N+1 electrons and the final atomic system includes an N-electron residual A(n+1)+ (also called target state) and a scattered electron (also called a photo-electron). The system can be described by a time-independent total wave function, which is the solution to the time-independent Schrödinger equation Ψ = Ψ HN 1 . (2.17) where E is the total energy. The nonrelativistic Hamiltonian HN+1 is written in cgs unit as > + + − ∇ − = 1 i j ij i i N r e r Ze m H h , (2.18) in which the one-electron part includes the kinetic energy and the Coulomb potential, and the two-electron part is the electromagnetic interaction between any two electrons; r is the i distance from nucleus to the ith electron and rij is the distance between the ith electron and the the unit of HN+1 and Bohr radius (a = [0] h2 me2 = 5.29177×10−11 m) as the unit r, now and throughout the chapter, and rewrite Eq. (2.18) in the form of = + > + + − ∇ − = 1 1 1 2 1 1 2 1 N i N i j ij i i N r r Z H . (2.19) Since the system is spherically symmetric, it is convenient to adopt spherical coordinate to describe it. In spherical coordinates, the two-electron term in the Hamiltonian is ( ) ( ) + > < + = lm l l j m l i m l ij r r r Y r Y l r 1 * ˆ ˆ 1 2 4 1 π (2.20) where are smaller and larger ones of r and [i] r[j] , respectively. 2.2.2. N-electron states Since only single-electron ionization is considered here, it is reasonable to focus on the first N electrons, which form the target states, before the scattered electron is included. The eigenstates of this N-electron system is characterized by (now i and j are the indices of different eigenstates instead of electron numbers) N i ij j N i H Φ = Φ (2.21) where the eigenstates Φ[i] correspond to N-electron energies E[i]N. Any bound target state is a linear combination of the Φi. To construct these wave functions, we start by looking at the single-electron (bound) atomic orbitals as the functions of position rr and spin state ms: ( ) ( ) ( ) m l nl s nlm P rY r m r m r The orbitals onlm are the hydrogenic eigenfunctions with n the principle quantum number, l the angular momentum quantum number with m the z component, respectively. The radial part Pnl ( ) is restricted by the orthogonality condition n n l n nl P P ['] =δ [′] (2.23) which is required by the orthogonality of the basis set o[nlm] ' ' ' 'lm nn ll mm n nlm o o =δ [′]δ δ . (2.24) The optimization of radial functions Pnl ( ) r is done prior to utilization of R-matrix program and carried out by other programs such as CIV3 or SUPERSTRUCTURE. The details of the CIV3 results in the present work will be presented in Chapter 3. We shall call Pnl ( ) ‘bound orbitals’ as distinguished from 'continuum orbitals' that we will encounter later. Now we define the N-electron configurations ( ) ( ) P P N N N k e o x o x N x x[1]L [1] [1] L ! 1 (2.25) where the summation is over all interchanges of the electron indices with the correct permutation symbol . The are totally antisymmetric with respect to interchange of particles. Each k in Eq. (2.25) indicates a different set of , where each is a member of the set nlm r m o r, in Eq. (2.22), with x as the combined coordinate of position rr and spin Theoretically, k can be infinite since the number of hydrogenic orbitals is infinite, but practically we have to limit the number of configurations to make the calculation feasible. For example, in helium, can be { } s 1 s 2 p 2 ,… in 1 S symmetry, and the are orthogonal since the orbitals o are orthogonal, and with infinite k they form a complete basis set. These configurations are suitable to be the basis set of the N-electron wave functions, and the configuration-interaction (CI) expansion of Φ[i] is N k ik N i x1,L,x b . (2.26) With the same basis set, the N-electron Hamiltonian is N kk k ' = , (2.27) and Eq. (2.21) is equivalent to the diagonalization of 2.2.3. The ejected electron When the ejected electron is added to the target wave function to complete the wave function for the (N+1)-electron system, some requirements must be kept in mind. First, the total angular momentum must be conserved. To yield a specific total angular momentum, there might be a few different ways to couple the target angular momentum and the ejected electron angular momentum. These different pairs of the target states Φ[i] and the ejected electron wave functions are called the scattering channels. For example, a 2p target state (l=1) and an s-wave ejected electron (l=0) are coupled an L= 1 state, and a 2s target state and a p-wave ejected electron can also form a L=1 state, but they are different channels. + = = +1 N i N i E k E E (2.28) or i N i E E k − + = +1 (2.29) to show the dependence of k on the photon energy. The channel is said to be “open” if [i] > − N E or “closed” if E−EiN <0. − =0 N i E simply means the total energy is just the ionization threshold energy. R-matrix theory is characterized by the partition of configuration space [22]. Figure 2.2 sketches this partitioning. Let there be a spherical shell with radius a centered at the nucleus of the atomic system. This shell is designed to largely enclose all the bound state wave functions. It assumes that outside the shell, the bound state wave functions vanish, and there exists solely the continuum ejected electron. With a chosen to meet this condition, the inner region and the outer region are described as follows: 1) For r <a, the system contains N+1 indistinguishable electrons. The exchange term between any two electrons in the Hamiltonian must be included as in Eq. (2.19). The target wave functions Φ[i] and the scattered electron are coupled totally-antisymmetrically. The final state wave functions are expanded in terms of configurations in a manner similar to Eq. (2.26). 2) For r>a, the (N+1)-electron system is viewed as a two-body system. The N-electron system is replaced effectively by a central potential centered at r=0. The ejected electron then is under a local potential and can be solved with an asymptotic expansion. functions in these two regions will be discussed in the later sections. In practice, a is determined to be sufficiently large so as ( ) r r a P[nl] < if > (2.30) for all the bound orbitals ( ) in use. 2.2.4. Internal region In the internal region, the system consists of N+1 indistinguishable electrons. The total wave function is an (N+1)-electron antisymmetric function with all the exchange terms. All the bound state wave functions are confined in the inner region, which means they drop to zero at the boundary r=a. In order to obtain the wave function in this region for any energy E, an energy-independent basis are built as ) ( ) + + + + = Φ + N j ij N N ij N N N i ijk N k d x x r r u r x x c x x 1 1 1L L ;ˆ , (2.31) in which we demand to be eigenfunctions of H with eigenvalues E in the [k] defined region. For each channel i, the function Φi is the target state wave function Φi coupled with its corresponding angular term and spin term of the scattered ensure the completeness of this basis set. Similar to in Subsection 2.2.2, j x Lx φ are formed with the single-electron orbitals. To make it a more compact form, Eq. (2.31) can be written as λ λ λ ψk Vk (2.32) where V[k][λ] are the collection of the coefficients c[ijk] and d[jk], and ϕ[λ] are the collection of the basis functions in Eq. (2.31). The Hamiltonian HN+1 is written in the ϕ[λ] basis as ' ' 1 + + [=] N H [λ] [λλ] λ ϕ ϕ (2.33) where the round brackets indicate that the range of integration is the inner region 0≤r≤a. The coefficients V of k[λ] ψk are determined by diagonalizing 1 + H , kk k k N k H ψ Eδ ψ + = [. (2.34) ] The functions ψk are then the eigenfunctions of 1 + H described by eigenvectors V with eigenvalues E , and are suitable to be the basis set for the total wave function in the [k] inner region. 2.2.5. Continuum orbitals For the continuum functions u[ij] ( ) r in Eq. (2.31), i is the channel label, which is associated with angular momentum l , and for each i, j is the label of a discrete set of solutions. [i] Theoretically, as long as uij ( ) form a complete set of basis that satisfies the boundary The appropriate continuum functions uij ( ) r in channel i are usually determined by solving the equation [( )] [( )] [( )] + + + − n nl ijn ij ij i i r P r u k r V r l l r i 2 0 2 2 2 1 d d (2.35) with the boundary conditions ( ) ( ) ( ) b r a u a u a u a r ij ij ij = = = d d 0 0 . (2.36) In Eq. (2.35), the summation of indices n is over all atomic orbitals of angular momentum l in i the bound state expansion. The Lagrange multipliers Λijn are chosen to meet the orthogonality nl ij P u (2.37) for all { } . Notice here the range of integration is from =0 to , as indicated by the round bracket. Since the solutions ( ) also satisfy ′, (2.38) the atomic and continuum orbitals together ( ) ( ) ( ) ( ) [,] [1] [,] [2] [,] L max min r P r u r u r Pn l[i] n l[i] i i (2.39) form a complete basis set in the region 0≤r≤a. For the potential V0 ( ) in Eq. (2.35), in principle, the choice is arbitrary but will have an effect on how fast the expansion converges. In our case, we choose V0 ( ) to be the average static potential viewed by the ejected electron. we see that u[ij] ( ) are solutions to a eigenvalue equation with eigenvalues 2 2, and are totally independent of the total energy E. 2.2.6. R-matrix Now it comes to the stage to build, associated with the given total energy E, the wave function Ψ in the internal region and the R-matrix to connect the wave function through the internal and external region. The wave function is described as = Ψ k k Ek (2.40) in the basis which is constructed by Eq. (2.35). In order to find A , we put Eq. (2.17), Eq. [Ek] (2.34) and Eq. (2.40) together to form ) ( N N k H Ψ − ΨH = E−E Ψ = E−E A + [ψ] [ψ] ψ 1 1 . (2.41) Within N+1 H , since the potential energy operator commutes with the position operator rr and operates equally from the right and from the left on any wave function of position, the potential energy part on the left hand side of Eq. (2.41) vanishes, and only the kinetic energy part stands out. Thus, this relation is rewritten as ) ( − Ψ∇ N k − ψ [+] 2[+] ψ , (2.42) where ∇N+1 acts on rN+1 . In Eq. (2.42), only the continuum orbitals contribute to the non-zero part on the left hand side, so using Eq. (2.31) to define ( ) ( ) i k ij ijk ik r c u r r w = = Φ ψ , (2.43) we further simplify Eq. (2.42) to ( ) ( ) ( ) ( ) ijk N ik i N N jk j N jk j N N ik i Ek w r w r w r w r E E A A Φ ∇ Φ − Φ ∇ Φ = − Since Φi are orthogonal functions, only i= j terms on the left hand side survive, and Eq. (2.44) becomes ( ) ( ) ( ) ( ) Ek i ik i ik w r E E A r r F r F r r w = − d d d d 2 1 (2.45) where the dummy variable r[N][+][1] is replaced by r, and F[i] ( ) , the reduced radial wave function of the ejected electron in channel i at energy E is defined by ( ) ( ) ik Ek i r A w r r F . (2.46) With Green's second identity, which reads (for arbitrary second-order continuous functions ( ) f[1] and f[2] ( ) 1 2 1 2 1 2 1 2 1 2 1 d x x x x f f ′′− f ′′f x= f f′− f′f (2.47) and the boundary conditions in Eq. (2.36), we convert Eq. (2.45) to ( ) ( ) ( ) ( Ek i i i ik F a E E A a b a F a w = − − ′ − , (2.48) which gives the expression for the A where we have used [Ek] F ' ( ) as the abbreviation of ( ) r rr a F d [=] d . Its expression is simply ( ) ( ) ′ − ( ) i i i ik k Ek w a aF a bF a E E a A 2 1 . (2.49) Plugging in these A back to Eq. (2.46), we get [Ek] ( ) ( ) ( ) ( ) j j i a R E aF a bF a F ' (2.50) where the R-matrix is defined by where wik ( ) E are determined by the solution of k k in Eq. (2.34), and E is the total For each set of the conserved quantum numbers (total angular momentum L, total spin S, and parity π), the (N+1)-Hamiltonian is diagonalized once, and Rij ( ) E as a function of E is obtained. The set of scattered wave functions at r=a are solved for using the coupled equations, Eq. (2.50). With these F[i] ( ) , we obtain through Eq. (2.49), and the wave function in the internal region is done. 2.2.7. Buttle correction In a practical calculation we only take finite terms in the expansion Eq. (2.32). For the omitted terms, even if each single term is small when Ek is far from E, they may add up coherently and make a considerable effect. This brings the main error to the wave function. Now consider the equation [( )] [( )] d 2 0 0 2 2 2 = + + + − V r k u r r l l r i i i i , (2.52) which is similar to Eq. (2.35), but kij2 is replaced by k where the ki2 2 are the channel energies (defined in Sec. 2.2.3). Suppose we truncate the expansion of R-matrix after the first N terms, then the correction, according to the method described by Buttle [46], is ( ) ( ) ( ) ( ) = − ∞ + = − − − = − ≈ N N 1 2 2 2 1 0 0 1 2 2 2 1 ' 1 j ij i ij i i to the diagonal elements of R-matrix. Here uij ( ) are the solutions to the eigenvalue equation Eq. (2.35) satisfying the boundary conditions Eq. (2.36). Adding this correction to R-matrix, Eq. (2.51) is rewritten as ( ) ( ) ( ) c ii k k jk ik ij R E E a w a w a E R + . (2.54) c ii R is often a simple continuous function of ki2 when ij i k k < . Since the number of required terms in the correction is usually large, we can fit this function to a few ki2 to fix the form of the function and estimate all the terms. Seaton developed the fitting process that we use in R-matrix calculation [47]. 2.2.8. External region Here we turn to the wave function in the external region. In this region the scattered electron is distinguishable from the first N electrons that stay with the nucleus. The total wave function is expanded in the form of ) ( ) + + + + = Φ Ψ i N N i N N N i N r r F r x x x x 1 1 1 1 1 1 1K K ;ˆ σ (2.55) where Φi are the same channel functions in Eq. (2.31), and the Fi ( ) +1 are the corresponding reduced radial wave functions of the scattered electron. In this form we omit the antisymmetry operator A[ to exclude the exchange terms between scattered electron and any bound electron. ] Plugging in this total wave function into Schrödinger equation Eq. (2.17), we get the equation for the functions F[i] ( ) [( )] [( ) ( )] + + + − j j ij i i i i r F r V r F k r z r l l r 2 2 1 d d 2 2 2 2 where the summation over j covers up to the number of channel functions Φ[i] in use, k[i]2/2 are the channel energies, z≡Z−N is the effective charge of the target, r≡rN+1 while the condition r>r[m] is valid in the whole external region, and with the expansion of Eq. (2.20), ( ) V[ij] is given by ( ) ( ) ( ) Φ Φ = Φ + Φ = Φ Φ = = + + = + + = + l j N n N n l l n i l j N n lm N m l n m l l l n i j N n nN i ij P r r r Y r Y r r l r r V 1 1 , 1 1 1 * 1 1 , 1 cos 1 ˆ ˆ 1 2 4 1 Note that in theory the expansion contains infinite l terms, but here we include only up to some maximum l value specified by the program user to make the calculation feasible. Defining the long-range potential coefficient a[ij]l as N n N n l l n i l ij r P a = Φ =1 + 1 , , (2.58) Eq. (2.56) is reduced to [( )] [( )] + = + + + − l j j l l ij i i i i r F r a r F k r z r l l r 1 2 2 2 2 2 2 1 d d (2.59) which can be integrated outward starting from and fitted to the asymptotic form at ∞ → r . Suppose we have n total channels and no open channels in the calculation, and we order no first in the n channels so that 2 1 2 1 kn kn kn o ≥ ≥ ≥ ≥L + L . Let us extend Fi ( ) to the double-index F[ij] ( ) r where the additional index j is for the n o linearly independent solutions. ( ) ( ) < − > + = ∞ → channels) (closed 0 exp channels) (open 0 cos sin 1 2 2 i i ij i i ij i ij i r ij k k K k r F where the n[o]×n[o] reactance matrix Kij (K-matrix) is to be determined when we apply the connection between internal and external wave functions through R-matrix, and other parameters are defined by ( ) k r k z r k k z i l r k l r k i i i i i i i i i i i i i 2 ln 1 arg 2 ln 2 1 − = − = + + Γ + − − = Note that the parameters are not related to the configuration functions that we defined earlier. Now the wave function in the external region will be done by solving Eq. (2.59) and boundary conditions in Eq. (2.60) once we get Kij 2.2.9. Open channel solutions In this section, we find the total wave function with the reduced radial functions Fij ( ) satisfying the boundary conditions in both the internal and external region. If we change to matrix format and use a dot as the abbreviation of the derivative of r (df /dr= f& ), Eq. (2.50) is expressed in the form of F R -F R F =a ⋅ & b ⋅ (2.62) which gives the values of the reduced radial functions F at r=a, where R is an n×n matrix. We now introduce n+n[o] linearly independent solutions of Fij in the external region ( ) ( ) [( )] + = = − = = = = = = ∞ → ∞ → ,n n ,n j i ,n ,n j i r c ,n ,n j i r s i ij i ij r ij i ij r ij 1 1 exp 1 1 cos 1 1 sin o o o in which [i] are defined in Subsection 2.2.8. The solutions s and c can be obtained straightforwardly, and there are a few available numerical packages for these solutions in the market. F is a linear combination of s and c: cK s F = + (2.64) and its first derivative is K c s F& = &+ & . (2.65) With these expressions, Eq. (2.62) becomes s cK s cK s+ =a &+ & −b + [, (2.66) ] and the solution for K is A B K = −1 (2.67) where the matrices A and B are − + − = s R s s a b a & (2.68) and − − + = c R c c a b 2.3.Radiative Process In this section, we consider the interaction between a photon with specified energy and an atomic system. The atomic system in this case is described by the (N+1)-electron wave function discussed 2.3.1. Closed channel solutions When all channels are closed, the general form of the wave function in the internal and the external region, as shown in Subsection 2.2.6 and Subsection 2.2.8, stay the same, but the boundary conditions change, thus changing the matching of the solutions. The external wave function has to satisfy the boundary conditions for c[ij] ( ) but not ( ) in Eq. (2.63), to satisfy the conditions ( ) ( ) i ,n j ,n c [ij] [i] ij = exp − =1 =1 → δ φ (2.70) holds the same definition. F is then given by the expansion of c as cx F = (2.71) where x replaces K as the coefficients of F. Eq. (2.66) then gives 2.3.2. Dipole matrices In order to perform the dipole approximation calculation that was discussed in Section 2.1, we use the dipole matrix involving the initial and final state wave functions as the formalism to calculate the photoionization. We introduce the dipole length and velocity operators as n n L r D r (2.73) and − = n n V r r (2.74) where the summation of n is over all electrons. Using the convention of Fano and Racah [48], we introduce the reduced dipole matrix ( ) a b D , between state a and state b as ( ) a a l b b b a b a b a L M D L M M lL L C L L D L b a D µ , = = + (2.75) The normalization of the bound states is ' nn n nΨ =δ Ψ , (2.76) and the normalization of the free state is ' E E E Ψ = − Ψ δ . (2.77) We divide D , ( ) a b into two terms as ( ) [a] [b] [D] ( ) [( )] [a] [b] [D] ( ) [( )] [a] [b] D , = I , + O , (2.78) where D( )I ( ) is the contribution from the internal region and ( )O ( ) from the external region. Now we discuss them separately. Suppose there are two wave functions Ψ[α] and Ψ[β] defined in the internal region. With the expansion of ΨE of Eq. (2.40), the dipole matrix for the internal region is ( ) ( ) = Ψ Ψ = ' ' ' I , kk kk k k I M A A D D β α β α β α (2.79) where Mkk' is defined by ( ) k I k kk D M =ψ ψ . (2.80) The coefficients A[Ek] can be written in matrix form, which we will apply shortly, by = O O Ek E E [A] A 0 0 1 A (2.81) where k is the element index and E is a parameter of the matrix. With expansion of in Eq. (2.32), we can further write = ' ' ' ' ' λλ λ λ λλ D V V M[kk] [k] [k] (2.82) where the elements of reduced matrix D are ( ) ' λ λ D I D = . (2.83) If the constant b introduced by the boundary condition Eq. (2.36) is set as 0 (which is a common setup in practice), the coefficients, A , are [Ek] ( ) ( ) where the superscript “T” is transpose, and where we have plugged in F R F'=1 −1 a (2.85) as the matrix form of Eq. (2.50) (with b=0), and the parameter E is labeled on each matrix to distinguish A at different energies. Now we introduce the diagonal matrix [Ek] G[E] with diagonal elements (with index k) E E k Ek = [−] , (2.86) Eq. (2.79) is in the final matrix form as ( ) [( )] β β β β α α α α w G MG w , = − − D (2.87) b. D( )O Now let us focus on the length operator. Since in the external region, the exchange terms between the photoelectron and the target electrons no longer exist, we divide the D r operator as r R D r r r + = where R is responsible for the target wave functions and rr for the photoelectron. Its matrix elements between state α and state β are then ( ) [( )] ) ( ' ' ' ' ' O , ii b i ia ii b i ia ii F F y F rF (2.88) where ' ' ' ˆ [i] i ii i i ii r y R x Φ Φ = Φ Φ = r . (2.89) The coefficients xii' are non-zero only when the transition between the target states is permitted two channels are built by the same target state, and when li =li'±1. The evaluation of Eq. (2.88) is described by Seaton [49]. 2.4.Breit-Pauli R-Matrix Theory The fully relativistic Dirac equation of motion can be approximated by the Schrödinger equation with the relativistic correction terms. The Breit-Pauli Hamiltonian of the system is expressed as BP H H H = + (2.90) where H has been fully discussed (as HN+1 for (N+1)-electron system) in Section 2.2. In the RMATRX1 code, H[REL] contains the corrections up to the order of α2Z4, which makes SO D1 mass REL H H H H = + + (2.91) where ∇ − n n H 4 mass-correction term (2.92) ∇ − n n n r Z H 1 one-body Darwin term (2.93) n n n n r s l Z H [3] r r α spin-orbit interaction (2.94) in which the summation of n covers all the electrons in the system. Each one of the three terms can be switched on or off optionally in the program. In these terms, H mass and HD1 commute defined by J and π instead of L, S and π. That is to say, each Jπ-symmetry of the (N+1)-electron system goes through an independent run in the program. In the Breit-Pauli R-matrix (BPRM) program, the Hamiltonian in Eq. (2.33), the long-range potential coefficients in Eq. (2.58), and the dipole matrix in Eq. (2.83) all need to be transformed into the pair-coupling scheme, which is defined by J s K K l r r r r r r = + = + (2.95) where J[i] is the total angular momentum of the target state in the ith channel, l and sr are the orbital and spin angular momentums of the photoelectron, and J is the total angular momentum of the final state. The whole procedure starts with the calculation of these matrices in Initial state: (N+1)-electron ion Doubly excited state Final state: N-electron ion + photoelectron Figure 2.1. The photoionization process. It goes either straight to the final ionized state or passes an intermediate excited state. Boundary of discrete states External region: Two-body system (N-electron state + photoelectron) nucleus Internal region: (N+1)-electron system -10 -8 -6 -4 -2 0 2 4 6 8 10 0 [ q = 0] q = 1 q = 2 The main computational tools in this work are the CIV3 code [33] to generate the discrete wave functions and energy levels, the modified RMATRX1 code [27] (including the LS and BP calculations) to calculate the cross sections, and the QB program [35] to characterize the resonances using the eigenphase sum. Figure 3.1 shows the programs that we used in the present work and the workflow through them. In separate sections, we discuss in detail the use of these programs and how we optimized the calculations to maintain both the accuracy of results and the efficiency of the processes. 3.1.CIV3 Program 3.1.1. Introduction The CIV3 program, developed by Hibbert [33], is a package to construct configuration interaction (CI) wave functions and energies, and to calculate electric-dipole oscillator strengths. The job of CIV3 in our present work is to generate the radial functions of the single-electron orbitals of the N-electron target states for us to feed into the R-matrix program. As introduced in Chapter 2, the CI expansion of the total wave function is Ψ M i LS i i LS φ (3.1) where φiLS are the configurations constructed by coupling the single-electron orbitals in a way to keep the total L and total S common to all configurations, as indicated by LS on both sides of the equation. We choose M to be large enough to cover all the non-negligible configurations contributing to ΨLS. Each orbital onlm
{"url":"https://1library.net/document/y81oggrz-synthesis-enzymatic-studies-selenium-derivatized-nucleosides-nucleotides-nucleic.html","timestamp":"2024-11-03T02:52:56Z","content_type":"text/html","content_length":"229425","record_id":"<urn:uuid:1871c645-c291-4699-8baf-256012f44e20>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00894.warc.gz"}
Getting summarizing values How many PC models does a particular supplier produce? How the average price is defined for computers with the same specifications? The answers to these and other questions associated with some statistic information may be obtained by means of summarizing (aggregate) functions. The following aggregate functions are assumed as standard: Function Description COUNT(*) Returns the number of rows of the table. COUNT Returns the number of values in the specified column. SUM Returns the sum of values in the specified column. AVG Returns the average value in the specified column. MIN Returns the minimum value in the specified column. MAX Returns the maximum value in the specified column. All these functions return a single value. In so doing, the functions COUNT, MIN, and MAX are applicable to any data types, while the functions SUM and AVG are only used with numeric data types. The difference between the functions COUNT(*) and COUNT(<column name>) is that the second does not calculate NULL values (as do other aggregate functions). Example 5.5.1 Find out the minimal and maximal prices for PCs: 1. SELECT MIN(price) AS Min_price, MAX(price) AS Max_price 2. FROM PC; The result is a single row containing the aggregate values: Min_price Max_price 350.0 980.0 Example 5.5.2 Find out the number of available computers produced by the maker А: 1. SELECT COUNT(*) AS Qty 2. FROM PC 3. WHERE model IN(SELECT model 4. FROM Product 5. WHERE maker = 'A' 6. ); As a result we get: Example 5.5.3 If the number of different models produced by the maker A is needed, the query may be written as follows (taking into account the fact that each model in the Product table is shown once): 1. SELECT COUNT(model) AS Qty_model 2. FROM Product 3. WHERE maker = 'A'; Example 5.5.4 Find the number of available different PC models produced by maker A. This query is similar to the preceding one for the total number of models produced by maker A. Now we need to find the number of different models in the PC table (available for sale). To use only unique values in calculating the statistic, the parameter DISTINCT with an aggregate function argument may be used. ALL is another (default) parameter and assumes that all the column values returned (besides NULLs) are calculated. The statement 1. SELECT COUNT(DISTINCT model) AS Qty 2. FROM PC 3. WHERE model IN (SELECT model 4. FROM Product 5. WHERE maker = 'A' 6. ); gives the following result: If we need the number of PC models produced by each maker, we will need to use the GROUP BY clause, placed immediately after the WHERE clause, if any. Suggested exercises: 10, 11, 12, 13, 18, 24, 25, 26, 27, 40, 41, 43, 51, 53, 54, 58, 61, 62, 75, 77, 79, 80, 81, 85, 86, 88, 91, 92, 93, 94, 95, 96, 103, 109, 127, 129
{"url":"http://sql-tutorial.ru/en/book_getting_summarizing_values.html","timestamp":"2024-11-09T07:39:18Z","content_type":"text/html","content_length":"74707","record_id":"<urn:uuid:d96e375e-3092-4df3-8fc8-f9b2606971e8>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00527.warc.gz"}
Syllabus Information Use this page to maintain syllabus information, learning objectives, required materials, and technical requirements for the course. Syllabus Information MTH 105Z - Math in Society Associated Term: Spring 2024 Learning Objectives: Upon successful completion of this course, the student will be able to: 1. Employ mathematical reasoning skills when reading complex problems requiring quantitative or symbolic analysis and demonstrate versatility in the consideration and selection of solution strategies 2. Demonstrate proficiency in the use of mathematical symbols, techniques, and computation that contribute to the exploration of applications of mathematics. 3. Use appropriate mathematical structures and processes to make decisions and solve problems in the contexts of logical reasoning, probability, data, statistics, and financial mathematics 4. Use appropriate representations and language to effectively communicate and interpret quantitative results and mathematical processes orally and in writing 5. Demonstrate mathematical habits of mind by determining the reasonableness and implications of mathematical methods, solutions, and approximations in context Required Materials: Technical Requirements:
{"url":"https://crater.lanecc.edu/banp/bwckctlg.p_disp_catalog_syllabus?cat_term_in=202440&subj_code_in=MTH&crse_numb_in=105Z","timestamp":"2024-11-14T14:24:47Z","content_type":"text/html","content_length":"8236","record_id":"<urn:uuid:b5fc0d60-162d-4f52-9061-225cbffd411e>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00046.warc.gz"}
Digital Signatures in Bitcoin In this article, we learn about digital signatures, and how they are created and verified in the Bitcoin protocol. Table of contents. 1. Introduction. 2. Digital Signatures. 3. Creating a Digital Signature. 4. How Digital Signatures are Verified. 5. Summary. We know that a sender A who wants to transfer funds to sender B has to sign a transaction to prove that they own the private key that corresponds to the public key without having to reveal the private key. It verifies that the sender has the funds he/she intends to send. Just like a bank cheque's signature is used by the sender to authorize a transfer of funds, so are digital signatures in relation to the blockchain. The only exception is that digital signatures used cryptography hashing and algorithms that make it very difficult to forge a signature. In Bitcoin and Etherium, the ECDSA(Elliptic Curve Digital Signature Algorithm) is used to generate digital signatures. For more on the ECDSA algorithm, we can refer to the links provided in the reference section. Elliptic Curve Cryptography is a form of public-key cryptography based on the discrete logarithm problem that is expressed by addition and multiplication on the points of the curve. Elliptic curves come in various types, Bitcoin uses the secp256k1 elliptic curve which is defined by the following function. ${\mathrm{y}}^{2}$ mod p = ${\mathrm{x}}^{3}$ + 7 mod p Above mod p means that the curve is over a finite field of prime order p, hence the following image of a scattered pattern of dots in two dimensions. The above elliptic curve is over a finite field of the prime order 17. On the other hand, the secp256k1 bitcoin curve would be a complex pattern of such dots on a very large grid. By using this algorithm we can use a private key to derive a public key by not a public key to derive the private key. This is referred to as a trap door function since the elliptic curve mathematics works in one direction. This makes transactions secure and tamper-proof in the blockchain. The ECDSA algorithm is implemented using the script programming language functions such as; • OP_CHECKSIG - used to verify that the digital signature used in a transaction input is valid. Returns True or False. • OP_CHECKSIGVERIFY - executed after the former for verification purposes. Returns nothing if it succeeds otherwise an error. • OP_CHECKMULTISIG - compares a digital signature against the public key until a match is found. It returns True or False. • OP_CHECKMULTISIGVERIFY - Similar to the former but executed after it. It returns nothing if successful. Digital Signatures. A digital signature is a mathematical scheme that is used to prove the authenticity of a digitally signed message, in this case, it does this for a transaction in the blockchain. A digitally signed message not only proves authenticity but also proves the non-repudiation and integrity of the message. In Bitcoin a digital signature has three functions namely; • To prove ownership of funds by the owner of the private key intends to send to a recipient. • To prove the above authorization is valid and non-repudiable. • The transaction can't be mutated after it is signed. Digitally signing transactions on the blockchain consists of an algorithm used to create a signature using the private key from a transaction and an algorithm that allows anyone(peers) to verify it. Creating a Digital Signature. We use the following function to create digital signatures; Sig = Fsig(Fhash(m),dA) • Sig - the resulting signature. • Fsig - the signing algorithm. • Fhash - the hashing function. • m - the message, in this case a transaction. • dA - private key used in the signing. The function can be summarized as follows; Fsig generates a temporary key pair comprised of public and private keys. This pair is used to calculate R and S after the transformation involving the signing of the private key Fsig(.., dA) and the transaction hash (Fhash(m)) The temporary pair is based on a random number k. k is also used as a temporary private key, it is also used in the derivation of a corresponding temporary public key P (P = K*G). R in the digital signature is the x coordinate of the temporary public key P. Now S = k^-1(Hash(m) + dA * R) mod p. • k = temporary private key • R = x-coordinate of the temporary public key • p - the prime order. When R and S have been calculated, they are serialized into a byte stream. This byte stream is then encoded using DER. DER(Distinguished Encoding Rules) is secure encryption that guarantees that a digital signature can be used in any environment such as different types of wallets or devices. This is achieved by making its resolution deterministic from the start to the end. In Bitcoin, it guarantees that the digital signatures are secure under all circumstances. Consider the following signature which consists of a byte stream of R and S values; Once we break it down and have the following components; 0x30 - start of the encoding sequence 0x45 - sequence length *0x02 - integer value 0x21 - integer length R - 00884d142d86652a3f47ba4746ec719bbfbd040a570b1deccbb6498c75c4ae24cb 0x02 - another integer 0x20 - integer length S - 4b9f039ff08df09cbe9f6addac960298cad530a863ea8f53982c09db8f6e3813 0x01 - type of signature hash used by the signature. Above, the values R and S are the important ones as we will use them to verify this signature. In Summary the steps of creating a digital signature consist of two major parts; 1. The first involves generating a random number which is multiplied by the generator point on the curve. We take the x-coordinate(half of the digital signature) of the generated point. This is R. 2. The second part involves taking the private key and multiplying it with R([R * private_key]). We then include the message we intend to sign. i.e ([R * private_key] + message). The result of this is the signature - S itself. Now we can use R and S to prove that we own the private key for the corresponding public key. How Digital Signatures are Verified. Digital signature verification requires the digital signature (R, S), the serialized transaction and the public key corresponding to the private key used to create the signature. We say that a digital signature is valid if the owner of the private key used to generate the public key also produced this signature on this transaction. A verification algorithm takes in a message which is then hashed transaction, the signer's public key, and the signature(R and S). The output should be True if the signature is valid, and False Since verification is the inverse of the generation function, R and S together with the public key are used to calculate P - a point on the elliptic curve. We have the following; P = S^-1 * Hash(m) * G + S^-1 * R * Qa, where; • R, S = signature value. • Qa = the recepients public key. • m = the transaction. • G = the elliptic curve generator. If the x-coordinate of P is equal to R, the signature is considered valid, otherwise, it is invalid. In Summary the steps of verifying a digital signature consist of finding three major points on the elliptic curve; Remember our goal here is to verify that the used public key and digital signature were created using the same private key. In this case, the recipient uses the discussed parts in the previous section to find two new points on the curve. 1. To obtain the first point, we divide the message by S i.e (message / S). This point is just the generator point multiplied by this value -> (message / S). 2. To obtain the second point we divide R and S i.e (R / S). This point is just the public key multiplied by this value -> (R / S). 3. Finally to obtain the third point on the curve, we add the above two points. In this case, if the x-coordinate of the third point is similar to the x-coordinate of the random point r that we began with, then it is proof that the digital signature was created using the private key corresponding to this public key. About the Schnorr signature algorithm The Schnorr signature is a digital signature that is produced as a result of the Schnorr signature algorithm. It also uses elliptic curve cryptography. It is known for its simplicity among other features that make it somewhat better than ECDSA such as computational efficiency, smaller storage requirements, and privacy. The reason it was not previously implemented in Bitcoin was because it is patented thereby restricting its use although, at the writing of this article, Bitcoin has already activated the taproot upgrade which incorporates the use of this algorithm. This upgrade incorporates Schnorr signatures making digital signatures more secure and simple to implement. Schnorr signatures are linear meaning that users can use a sum of public keys to sign a sum of signatures. Therefore multiple transactions can be verified at once making validation faster and the network a bit faster. Remember the process of signing is done to each and every input that is contained in a transaction, this allows for multi-party privacy and anonymity of transactions. Private keys are needed in the creation of digital signatures. A digital signature combined with the public key is enough to verify the private key associated with the public key. It is much safer to create a digital signature offline.
{"url":"https://iq.opengenus.org/digital-signatures-in-bitcoin/","timestamp":"2024-11-07T21:55:38Z","content_type":"text/html","content_length":"36777","record_id":"<urn:uuid:53e188c9-4c5b-4b9a-924f-133568ad4e26>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00267.warc.gz"}
Cross Power Spectral Density Spectrum for Noise Modelling and Filter Design Key Takeaways • Cross power spectral density ❲CPSD❳ is the Fourier Transform of the cross-correlation function. Cross-correlation function is a function that defines the relationship between two random signals. • The cross power spectral density, S[xy]❲f❳is complex-valued with real and imaginary parts given by co spectrum ❲Co[xy]❲f❳❳and quadrature spectrum ❲Qu[xy]❲f❳❳respectively. • Coherence functionC[xy]❲f❳ is a measure to estimate how one signal corresponds to another at each frequency and can be called normalized cross power spectral density. Figure 1: Cross spectral density gives the spectral density of two signals If you want to analyze the noise present in an audio amplifier circuit, you will need to choose the appropriate analysis method. When choosing an analysis method, keep in mind that noise signals are random and stochastic in nature. Therefore, FFT analysis should be removed from the list of frequency-domain analysis as it is more an algorithm for converting between time and frequency domains. For noise analysis, power spectral analysis is often the best option. Power spectral analysis plots the noise powers against frequencies, making it easy to understand power-frequency relationships at a Cross power spectral density ❲CPSD❳, or cross-spectrum, is a spectral analysis that compares two signals. It gives the total noise power spectral density of two signals. The only condition is that there should be some phase difference or time delay between these two signals. CPSD analysis is most suitable for studying the effect of stationary, but stochastic signals. Cross Power Spectral Density Spectrum The infinite signal energy associated with the stochastic process limits FFT on stochastic signals. If your objective is to find the amplitude and frequency data of two random signals combined, then spectral density is the right option. The relationship between two time-domain signals can be expressed in frequency-domain using CPSD. It is best to calculate the effect of noise in electronics and communication systems. The effect of thermal noise, shot noise, white noise, and flicker noise in circuits can be analyzed using spectral densities. CPSD spectrum is an inevitable part of direct digital and analog signal analysis. The reliability of the cross-spectrum is accepted in signal processing systems. The CPSD estimation method is used in RF and microwave circuits to identify the phase noise and amplitude noise. CPSD Functions Consider two random signals, x❲t❳and y❲t❳. Our task is to find the amplitude and frequency relationship when these signals are present together in a system. Any correlation between the signals reflects on the CPSD spectrum. If the signals are uncorrelated, the CPSD spectrum will be the power spectral density ❲PSD❳ of x❲t❳and y❲t❳merged to one plot. The CPSD is zero at all frequencies for uncorrelated signals. When the two signals are identical, the CPSD spectrum and PSD spectrum of the signals are the same. Cross-correlation function R[xy]❲τ❳is a function that defines the relationship between two random signals. We can mathematically express R[xy]❲τ❳as: The cross-correlation function is not symmetric around τ=0. It is not always even like an auto-correlation function. However, it satisfies the following property: The CPSD, S[xy]❲f❳is the Fourier Transform of the cross-correlation function and is given as: The CPSD is complex in nature and contains both real and imaginary parts. This is due to the asymmetry associated with the cross-correlation function. If cross-spectrum is represented as a complex function, then: where Co[xy]❲f❳ and Qu[xy]❲f❳ are the cospectrum and quadrature spectrum respectively. The magnitude of the cross power density can be given by: The cospectrum and quad spectrum obeys the following two equations: The last two equations show the changes in the cospectrum and quad spectrum when the order of signals considered is reversed. The CPSD of two signals are not equal when the order of signals considered is reversed. The following equation depicts the relationship: Coherence function Whether a mechanical, electrical, or information system, noise is inherent. Magnitude squared coherence, or simply coherence functionC[xy]❲f❳, is a measure to estimate how one signal corresponds to another at each frequency. It can otherwise be called normalized cross power spectral density, and can be expressed as: where S[x]❲f❳ and S[y]❲f❳are the auto-correlation based PSDs of signals x❲t❳and y❲t❳, respectively. A practical application of CPSD analysis and coherence function is in the Electrocardiograph ❲ECG❳, which plots heart signals by placing electrodes over the chest. Infinite Smoothing Filter Design Using CPSD You want to design a filter in such a way that the input signal x❲t❳to the filter should give the output signal y❲t❳. Here the input and output signals are known, and our task is to design the filter and find out the filter impulse transfer function, h❲t❳. We’ll go for optimal infinite smoothing filter design where the mean squared error is minimum. In this case, the cross-correlation function can be derived as follows: Applying the Fourier convolution theorem to the above equation, it then becomes: Since we are clear about the input signal and the expected output signal, there shouldn’t be much confusion in finding the CPSD S[xy]❲f❳and PSD S[y]❲f❳ of output signal y❲t❳. Rearranging the above equation, the transfer function of the infinite smoothing filter can be obtained with the following equation: The infinite smoothing filter design can be decoded easily with the use of CPSD. Likewise, the same can be applied to design other filters such as band-pass filter, band-limit filters, etc. The skill of an engineer lies in the methods they choose to solve complex problems. With noise signals, FFT analysis has to be considered greatly, though it is particularly necessary in time conversions, due to its non-deterministic behavior it can also prove unreliable. For such signals, CPSD analysis offers better spectral estimation and frequency resolution. When you are dealing with noise modeling, de-noising, or filter design, make sure to use CPSD analysis.
{"url":"https://resources.system-analysis.cadence.com/blog/msa2020-cross-power-spectral-density-spectrum-for-noise-modelling-and-filter-design","timestamp":"2024-11-09T14:33:20Z","content_type":"text/html","content_length":"208746","record_id":"<urn:uuid:7a57222e-631b-4c70-a155-5550ec979402>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00454.warc.gz"}
How to Create a Break-Even Graph in Excel | Techwalla A break-even point represents the number of units you must sell to begin making a profit, given your fixed costs, cost per unit, and revenue per unit. For example, you might want to know the break even point for selling Mylar balloons. If you know the break-even point, you'll know how many balloons you have to sell to make a profit. To graph a break-even point using Excel 2007, you'll need to know your fixed costs (building, equipment maintenance, and so forth) and variable costs (electricity, wages, and other fluctuating costs). On a graph, the break-even point is shown by the intersection between revenue and total cost. Step 1 In cell A1, type "Fixed Cost," and in B1 enter the dollar amount of your fixed costs. For example, the supplier of mylar balloons requires that you pay $100 membership fee to be a buyer, and you are charged that amount no matter how many balloons you buy. In that case you would type "100" into B1. Step 2 In cell A2, type "Cost per Unit," and in B2 enter the dollar amount of the cost per unit. For example, each balloon cost $1. You would enter "1" into B2. Step 3 In cell A3, type "Revenue per Unit," and in B3 enter the dollars amount of the revenue per unit. If you plan to sell your balloons at the county fair, and you know you can charge $6 per balloon, then enter "6" into B3. Step 4 In cell A5, type "Units." In cell A6, enter the number 1. Under the number one (in cell A7) enter the number 2, and continuing entering numbers until you reach 25. Step 5 In cell B5, type "Cost." In B6 type "=A6*$B$2+$B$1" without any quotes. This formula means "Multiply the number of units by the cost per unit, then add the fixed cost." Step 6 Copy B6 and paste it into every cell in the Cost column. In our example, the first cell should read "101," and each cell should grow in value by 1, until the final value is "125." Step 7 In cell C5, type "Revenue." In C6 type "=A6*$B$3" (without any quotes). This formula means "Multiply the numbers of units by the revenue per unit." Step 8 Copy C6 and paste it into every cell in the Revenue Column. In our example, the first cell should read "6," and each cell should grow in value by 6, until the value is "150." Step 9 In cell D5, type "Profit". Profit is Revenue-Cost, so enter the formula "=C6-B6" in cell D6. Step 10 Copy that cell, and paste it into every cell in the Profit column. In our example, the first cell should read "-95" or "(95)" (meaning negative 95). The final column should read "25." Step 11 Highlight the area from A5 to D30 by holding down the left mouse key and mousing over the area. Step 12 Click the Insert tab on the ribbon at the top of the Excel interface. Inside the "Charts" area on the Insert tab, you'll see a "Line" button. Step 13 Click that button then choose "Stacked Line" from the sub menu to insert the chart. The break-even point is the point on the chart where the profit line crosses the cost line.
{"url":"https://www.techwalla.com/articles/how-to-create-a-break-even-graph-in-excel","timestamp":"2024-11-07T23:34:30Z","content_type":"text/html","content_length":"320606","record_id":"<urn:uuid:13f37c76-3806-4836-8f31-dd4f7d220947>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00079.warc.gz"}
JOBZ is CHARACTER*1 [in] JOBZ = 'N': Compute eigenvalues only; = 'V': Compute eigenvalues and eigenvectors. RANGE is CHARACTER*1 = 'A': all eigenvalues will be found. [in] RANGE = 'V': all eigenvalues in the half-open interval (VL,VU] will be found. = 'I': the IL-th through IU-th eigenvalues will be found. UPLO is CHARACTER*1 [in] UPLO = 'U': Upper triangle of A is stored; = 'L': Lower triangle of A is stored. N is INTEGER [in] N The order of the matrix A. N >= 0. A is COMPLEX*16 array, dimension (LDA, N) On entry, the Hermitian matrix A. If UPLO = 'U', the leading N-by-N upper triangular part of A contains the upper triangular part of the matrix A. If UPLO = 'L', [in,out] A the leading N-by-N lower triangular part of A contains the lower triangular part of the matrix A. On exit, the lower triangle (if UPLO='L') or the upper triangle (if UPLO='U') of A, including the diagonal, is LDA is INTEGER [in] LDA The leading dimension of the array A. LDA >= max(1,N). [in] VL VL is DOUBLE PRECISION VU is DOUBLE PRECISION If RANGE='V', the lower and upper bounds of the interval to [in] VU be searched for eigenvalues. VL < VU. Not referenced if RANGE = 'A' or 'I'. [in] IL IL is INTEGER IU is INTEGER If RANGE='I', the indices (in ascending order) of the [in] IU smallest and largest eigenvalues to be returned. 1 <= IL <= IU <= N, if N > 0; IL = 1 and IU = 0 if N = 0. Not referenced if RANGE = 'A' or 'V'. ABSTOL is DOUBLE PRECISION The absolute error tolerance for the eigenvalues. An approximate eigenvalue is accepted as converged when it is determined to lie in an interval [a,b] of width less than or equal to ABSTOL + EPS * max( |a|,|b| ) , where EPS is the machine precision. If ABSTOL is less than or equal to zero, then EPS*|T| will be used in its place, where |T| is the 1-norm of the tridiagonal matrix obtained [in] ABSTOL by reducing A to tridiagonal form. Eigenvalues will be computed most accurately when ABSTOL is set to twice the underflow threshold 2*DLAMCH('S'), not zero. If this routine returns with INFO>0, indicating that some eigenvectors did not converge, try setting ABSTOL to See "Computing Small Singular Values of Bidiagonal Matrices with Guaranteed High Relative Accuracy," by Demmel and Kahan, LAPACK Working Note #3. M is INTEGER [out] M The total number of eigenvalues found. 0 <= M <= N. If RANGE = 'A', M = N, and if RANGE = 'I', M = IU-IL+1. W is DOUBLE PRECISION array, dimension (N) [out] W On normal exit, the first M elements contain the selected eigenvalues in ascending order. Z is COMPLEX*16 array, dimension (LDZ, max(1,M)) If JOBZ = 'V', then if INFO = 0, the first M columns of Z contain the orthonormal eigenvectors of the matrix A corresponding to the selected eigenvalues, with the i-th column of Z holding the eigenvector associated with W(i). If an eigenvector fails to converge, then that column of Z [out] Z contains the latest approximation to the eigenvector, and the index of the eigenvector is returned in IFAIL. If JOBZ = 'N', then Z is not referenced. Note: the user must ensure that at least max(1,M) columns are supplied in the array Z; if RANGE = 'V', the exact value of M is not known in advance and an upper bound must be used. LDZ is INTEGER [in] LDZ The leading dimension of the array Z. LDZ >= 1, and if JOBZ = 'V', LDZ >= max(1,N). WORK is COMPLEX*16 array, dimension (MAX(1,LWORK)) [out] WORK On exit, if INFO = 0, WORK(1) returns the optimal LWORK. LWORK is INTEGER The length of the array WORK. LWORK >= 1, when N <= 1; otherwise 2*N. For optimal efficiency, LWORK >= (NB+1)*N, where NB is the max of the blocksize for ZHETRD and for [in] LWORK ZUNMTR as returned by ILAENV. If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA. [out] RWORK RWORK is DOUBLE PRECISION array, dimension (7*N) [out] IWORK IWORK is INTEGER array, dimension (5*N) IFAIL is INTEGER array, dimension (N) If JOBZ = 'V', then if INFO = 0, the first M elements of [out] IFAIL IFAIL are zero. If INFO > 0, then IFAIL contains the indices of the eigenvectors that failed to converge. If JOBZ = 'N', then IFAIL is not referenced. INFO is INTEGER = 0: successful exit [out] INFO < 0: if INFO = -i, the i-th argument had an illegal value > 0: if INFO = i, then i eigenvectors failed to converge. Their indices are stored in array IFAIL.
{"url":"https://netlib.org/lapack/explore-html-3.4.2/dd/d95/zheevx_8f.html","timestamp":"2024-11-10T11:53:46Z","content_type":"application/xhtml+xml","content_length":"20879","record_id":"<urn:uuid:0e4b9e9d-698e-498b-8b08-ac600d4cafa4>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00444.warc.gz"}
Gonzalez Camacho, J.M. Gonzalez Camacho, J.M. Research Projects Organizational Units Last Name Gonzalez Camacho Gonzalez Camacho, J.M. Search Results • Genomic-enabled prediction with classification algorithms (Springer Nature, 2014) Ornella, L.; Pérez-Rodríguez, P.; Tapia, E.; Gonzalez Camacho, J.M.; Burgueño, J.; Xuecai Zhang; Singh, S.; San Vicente Garcia, F.M.; Bonnett, D.; Dreisigacker, S.; Singh, R.P.; Long, N.; Crossa, J. Pearson’s correlation coefficient (ρ) is the most commonly reported metric of the success of prediction in genomic selection (GS). However, in real breeding ρ may not be very useful for assessing the quality of the regression in the tails of the distribution, where individuals are chosen for selection. This research used 14 maize and 16 wheat data sets with different trait–environment combinations. Six different models were evaluated by means of a cross-validation scheme (50 random partitions each, with 90% of the individuals in the training set and 10% in the testing set). The predictive accuracy of these algorithms for selecting individuals belonging to the best α=10, 15, 20, 25, 30, 35, 40% of the distribution was estimated using Cohen’s kappa coefficient (κ) and an ad hoc measure, which we call relative efficiency (RE), which indicates the expected genetic gain due to selection when individuals are selected based on GS exclusively. We put special emphasis on the analysis for α=15%, because it is a percentile commonly used in plant breeding programmes (for example, at CIMMYT). We also used ρ as a criterion for overall success. The algorithms used were: Bayesian LASSO (BL), Ridge Regression (RR), Reproducing Kernel Hilbert Spaces (RHKS), Random Forest Regression (RFR), and Support Vector Regression (SVR) with linear (lin) and Gaussian kernels (rbf). The performance of regression methods for selecting the best individuals was compared with that of three supervised classification algorithms: Random Forest Classification (RFC) and Support Vector Classification (SVC) with linear (lin) and Gaussian (rbf) kernels. Classification methods were evaluated using the same cross-validation scheme but with the response vector of the original training sets dichotomised using a given threshold. For α=15%, SVC-lin presented the highest κcoefficients in 13 of the 14 maize data sets, with best values ranging from 0.131 to 0.722 (statistically significant in 9 data sets) and the best RE in the same 13 data sets, with values ranging from 0.393 to 0.948 (statistically significant in 12 data sets). RR produced the best mean for both κ and RE in one data set (0.148 and 0.381, respectively). Regarding the wheat data sets, SVC-lin presented the best κ in 12 of the 16 data sets, with outcomes ranging from 0.280 to 0.580 (statistically significant in 4 data sets) and the best RE in 9 data sets ranging from 0.484 to 0.821 (statistically significant in 5 data sets). SVC-rbf (0.235), RR (0.265) and RHKS (0.422) gave the best κ in one data set each, while RHKS and BL tied for the last one (0.234). Finally, BL presented the best RE in two data sets (0.738 and 0.750), RFR (0.636) and SVC-rbf (0.617) in one and RHKS in the remaining three (0.502, 0.458 and 0.586). The difference between the performance of SVC-lin and that of the rest of the models was not so pronounced at higher percentiles of the distribution. The behaviour of regression and classification algorithms varied markedly when selection was done at different thresholds, that is, κ and RE for each algorithm depended strongly on the selection percentile. Based on the results, we propose classification method as a promising alternative for GS in plant breeding. • Applications of machine learning methods to genomic selection in breeding wheat for rust resistance (Crop Science Society of America, 2018) Gonzalez Camacho, J.M.; Ornella, L.; Pérez-Rodríguez, P.; Gianola, D.; Dreisigacker, S.; Crossa, J. New methods and algorithms are being developed for predicting untested phenotypes in schemes commonly used in genomic selection (GS). The prediction of disease resistance in GS has its own peculiarities: a) there is consensus about the additive nature of quantitative adult plant resistance (APR) genes, although epistasis has been found in some populations; b) rust resistance requires effective combinations of major and minor genes; and c) disease resistance is commonly measured based on ordinal scales (e.g., scales from 1?5, 1?9, etc.). Machine learning (ML) is a field of computer science that uses algorithms and existing samples to capture characteristics of target patterns. In this paper we discuss several state-of-the-art ML methods that could be applied in GS. Many of them have already been used to predict rust resistance in wheat. Others are very appealing, given their performance for predicting other wheat traits with similar characteristics. We briefly describe the proposed methods in the Appendix. • Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat (Genetics Society of America, 2012) Pérez-Rodríguez, P.; Gianola, D.; Gonzalez Camacho, J.M.; Crossa, J.; Manes, Y.; Dreisigacker, S. In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. • Genome-enabled prediction of genetic values using radial basis function neural networks (Springer, 2012) Gonzalez Camacho, J.M.; De Los Campos, G.; Pérez-Rodríguez, P.; Gianola, D.; Cairns, J.E.; Mahuku, G.; Babu, R.; Crossa, J. The availability of high density panels of molecular markers has prompted the adoption of genomic selection (GS) methods in animal and plant breeding. In GS, parametric, semi-parametric and non-parametric regressions models are used for predicting quantitative traits. This article shows how to use neural networks with radial basis functions (RBFs) for prediction with dense molecular markers. We illustrate the use of the linear Bayesian LASSO regression model and of two non-linear regression models, reproducing kernel Hilbert spaces (RKHS) regression and radial basis function neural networks (RBFNN) on simulated data and real maize lines genotyped with 55,000 markers and evaluated for several trait?environment combinations. The empirical results of this study indicated that the three models showed similar overall prediction accuracy, with a slight and consistent superiority of RKHS and RBFNN over the additive Bayesian LASSO model. Results from the simulated data indicate that RKHS and RBFNN models captured epistatic effects; however, adding non-signal (redundant) predictors (interaction between markers) can adversely affect the predictive accuracy of the non-linear regression models.
{"url":"https://repository.cimmyt.org/entities/person/93a0965c-7cf2-4ef6-84cd-e0f211889de5","timestamp":"2024-11-14T01:22:06Z","content_type":"text/html","content_length":"705187","record_id":"<urn:uuid:572ab7ad-6508-4042-a8a6-d3e5d789b8f3>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00098.warc.gz"}
Parabolas - effect of parameter 'a' This section looks at the effect of changing the parameter $a$ in the Cartesian equation of the parabola $y^2 = 4 a x$. Here is the same parabola, plotted with different values of $a$: Changing the value of $a$ moves the position of the focus and the directrix, which in turn changes the curve. The smaller the value of $a$, the closer the focus and directrix are to the origin. All parabolas are the same shape An interesting fact about parabolas is that they are all similar (ie the same shape). Making $a$ smaller makes the entire parabola smaller, but doesn't change its shape. Other curves have the same property, for example all circles are the same shape - changing the radius makes a different sized circle, but doesn't change its shape. This shouldn't really be a surprise. When we make $a$ smaller, we move the focus and directrix closer together, but we then draw the parabola in exactly the same way, so of course we just end up with a smaller parabola of the exact same shape. But that may not be entirely obvious from looking at the image above - as $a$ get smaller the parabola appears to be narrower and more pointed. That is simply because you can see more of the shape. In this diagram, the blue shaded rectangle shows the equivalent portion of the curve for each value of $a$. If you look closely you will see that they are identical shapes: As a further illustration, in this animation we draw two parabolas, with $a = 1$ and $a = 0.25$, and zoom in on the smaller parabola until they are both the same size: See also Join the GraphicMaths Newletter Sign up using this form to receive an email when new content is added:
{"url":"https://www.graphicmaths.com/pure/coordinate-systems/parabola-effect-of-a/","timestamp":"2024-11-05T10:19:23Z","content_type":"text/html","content_length":"28015","record_id":"<urn:uuid:2518122c-dc24-470a-a573-1f8d5a5dc8a3>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00760.warc.gz"}
Conditional Remix & Share Permitted CC BY-NC-SA Demonstrates how to compute with positive and negative fractions and compare the answers. [5:59] Khan Academy learning modules include a Community space where users can ask questions and seek help from community members. Educators should consult with their Technology administrators to determine the use of Khan Academy learning modules in their classroom. Please review materials from external sites before sharing with students. Material Type: Date Added:
{"url":"https://openspace.infohio.org/browse?f.keyword=expressions&batch_start=20","timestamp":"2024-11-01T22:08:09Z","content_type":"text/html","content_length":"163913","record_id":"<urn:uuid:eae06c31-2bd1-4a82-9fd4-d1b8aaa01323>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00063.warc.gz"}
Research on Chlorophyll-a Concentration Retrieval Based on BP Neural Network Model—Case Study of Dianshan Lake, China School of Marine Science, Shanghai Ocean University, Shanghai 201306, China Shanghai Estuary Marine Surveying and Mapping Engineering Technology Research Center, Shanghai 201306, China Key Laboratory of Marine Ecological Monitoring and Restoration Technologies, Shanghai 201306, China Authors to whom correspondence should be addressed. Submission received: 14 April 2022 / Revised: 28 June 2022 / Accepted: 1 July 2022 / Published: 20 July 2022 The Chlorophyll-a (Chl-a) concentration is an important indicator of water environmental conditions; thus, the simultaneous monitoring of large-area water bodies can be realized through the remote sensing-based retrieval of the Chl-a concentrations. The back propagation (BP) neural network learning method has been widely used for the remote sensing retrieval of water quality in first and second-class water bodies. However, many Chl-a concentration measurements must be used as learning samples with this method, which is constrained by the number of samples, due to the limited time and resources available for simultaneous measurements. In this paper, we conduct correlation analysis between the Chl-a concentration data measured at Dianshan Lake in 2020 and 2021 and synchronized Landat-8 data. Through analysis and study of the radiative transfer model and the retrieval method, a BP neural network retrieval model based on multi-phase Chl-a concentration data is proposed, which allows for the realization of remote sensing-based Chl-a monitoring in third-class water bodies. An analysis of spatiotemporal distribution characteristics was performed, and the method was compared with other constructed models. The research results indicate that the retrieval performance of the proposed BP neural network model is better than that of models constructed using multiple regression analysis and curve estimation analysis approaches, with a coefficient of determination of 0.86 and an average relative error of 19.48%. The spatial and temporal Chl-a distribution over Dianshan Lake was uneven, with high concentrations close to human production and low concentrations in the open areas of the lake. During the period from 2020 to 2021, the Chl-a concentration showed a significant upward trend. These research findings provide reference for monitoring the water environment in Dianshan Lake. 1. Introduction Chlorophyll-a (Chl-a) is an important indicator for the evaluation of water quality, which is often used to evaluate the eutrophication of water bodies [ ]. Lakes are important freshwater resources with multiple functions, such as flood control and release, water source, runoff, shipping, and aquaculture; however, with the continuous interference of human production and living, the pollution problem in lakes has become increasingly serious, affecting the sustainable development of water resources. Therefore, the monitoring of water quality parameters is of important social significance [ ]. On one hand, water quality is traditionally monitored through manual sampling, laboratory analysis, and other processes, which are time-consuming, costly, and do not reflect the overall water quality status well. Furthermore, large-scale real-time monitoring cannot be achieved [ ]. On the other hand, remote sensing technology has advantages of low cost, wide range, high monitoring efficiency, and dynamic monitoring. Remote sensing technology is becoming increasingly important in the water quality monitoring field [ ]. The Landsat-8 satellite data chosen for this study have been widely used, as they are completely open and freely available. Not only has the spatial resolution been improved and the band setting optimized, but it also has advantages in water quality monitoring through ocean and lake remote sensing [ At present, the retrieval of the Chl-a concentration in water using remote sensing is mainly achieved through empirical, analytical, semi-empirical, and machine learning methods [ ]. In empirical methods, a mathematical relationship between the measured Chl-a concentration and the spectral band information of a water body is derived [ ]. In semi-empirical methods, a retrieval model is constructed based on the empirical method, using a combination of the optical characteristics and statistical analysis of the water body [ ]. In analytical methods, the radiative transfer mechanism and optical properties (e.g., the Chl-a in water) are analyzed, and a retrieval model with physical significance is constructed [ ]. Based on the above different remote sensing-based retrieval methods, a variety of retrieval models for chlorophyll a concentration has been constructed. Based on empirical methods, single-band, band ratio and band combination models have been constructed to estimate the concentration of Chl-a in first and second-class water bodies [ ]. Based on the radiative transfer model or bio-optical models, the Chl-a concentration in second-class water bodies has been estimated; high accuracy and good retrieval results were achieved [ ]. A retrieval model for Chl-a concentration has been constructed based on an analytical method, and good retrieval results were also achieved [ ]. First-class water bodies have good water quality, less pollution, and mature remote sensing retrieval technology, while second-class water bodies are slightly polluted, with the water containing phytoplankton, suspended sediment, and colored dissolved organic matter, which have a certain influence on the retrieval the Chl-a concentration in the water body. Third-class water bodies are polluted, the water body is eutrophic, and there is significant phytoplankton, suspended sediment, and colored dissolved organic matter, which affect the spectral curve of the water body, thereby affecting the determination of the Chl-a concentration and increasing the difficulty of retrieval research. Many studies have considered the retrieval of water Chl-a concentration in first and second-class water bodies; however, in comparison, there have been few studies focused on third class water bodies. Some neural network algorithms such as BP neural network [ ], convolutional neural network [ ] and hybrid neural network [ ] are applied to the study of Chl-a concentration retrieval in water bodies. Among them, the BP neural network is especially simple to construct, can simulate the nonlinear problems of complex water bodies, and is widely used. Compared with traditional retrieval models, it has been found that the retrieval effect of BP neural network models is often superior, and their retrieval accuracy is typically high [ ]. The development of machine learning algorithms provides new ideas for the retrieval of Chl-a concentration in third-class water bodies. In the paper, based on the BP neural network model, we retrieved and validated the Chl-a concentration in Dianshan Lake using two periods’ data of Chl-a concentration with two Landsat-8 remote sensing images. The retrieval of Chl-a concentration in the third-class water body Dianshan Lake was achieved, and the spatial and temporal variation characteristics of the Chl-a concentration in Dianshan Lake were analyzed. We compared the proposed model with band combination retrieval models and determined that the BP neural network model had a better retrieval effect. The experimental results indicate that the strategy of joint retrieval using remote sensing and actual measurement data from two years is feasible, overcoming the situation where individual remote sensing images may correspond to less actual measurement data. According to the data released by the Shanghai Water Environment Monitoring Center, the overall water quality of Dianshan Lake classifies it as a third-class water body. 2. Research Data Introduction 2.1. Study Area Dianshan Lake (31°04′–31°12′ N, 120°54′–121°01′ E) is located in the lower reaches of the Yangtze River, at the junction of Shanghai and Jiangsu (see Figure 1 ). This lake is the largest natural lake in Shanghai, with a total area of approximately 62 km , of which 46.7 km (accounting for 75.3% of the entire lake) is in the Shanghai area. Dianshan Lake is a shallow lake in a plain water network area, and is shaped similarly to a gourd, that is, wide in the south and narrow in the north and its terrain is sloped from west to east. The lake mainly undertakes the inflow of water from the Taihu Lake Basin. The water flows through the Huangpu River to the mouth of the Yangtze River and into the East China Sea [ ]. Many rivers go in and out of the lake, resulting in abundant water resources, giving the lake both economic and social significance. 2.2. Measured Data Water samples were collected from Dianshan Lake on 21 December 2020 and 14 November 2021. The sky was cloudless, and the water surface was calm on the day of sampling. A total of 80 sampling points were used (see Figure 1 ). The geographic location information of sampling points was accurately recorded through global data corresponding to the sampling points. According to the principle of equilibrium and randomness, 80% of the sampling points were taken as the modeling points for the retrieval model, while the remaining 20% were used to test the retrieval performance of the model; that is, 64 sampling points were used for model construction and 16 sampling points were used for testing (location and Chl-a concentration of the test samples are listed in Table 1 ). The Chl-a concentration of the samples was measured at the Shanghai University Engineering Research Center for Water Environment Ecology through acetone extraction spectrophotometry (statistical data of measured Chl-a concentration are shown in Table 2 ). The experimental principle involved extracting and determining the Chl-a concentration with the 90% acetone solvent through repeated grinding, extraction, and centrifugation. The acetone extraction spectrophotometric method is simple to operate and has high determination accuracy [ 2.3. Landsat-8 Remote Sensing Image The Landsat-8 data used in this research were obtained from the official website of the United States Geological Survey. The satellite parameters are shown in Table 3 . According to the principle of simultaneous registration, the Landsat-8 data covering the Dianshan Lake area on 22 December 2020 and 14 November 2021 were selected as the research objects. The cloud content of the 22 December 2020 image data was 0.61%, while that of the 14 November 2021 image data was 0.2%. The data were captured by the Operational Land Imager sensor, and the radiance value (Digital Number value) was recorded. The Digital Number values in the remote sensing images needed to be converted into radiance values. Moreover, the sensor is affected by atmospheric molecules, cloud particles, aerosols, and other factors when receiving the reflection information of ground objects, resulting in atmospheric radiation information being present in the data. This leads to the spectral information of the ground objects differing from the obtained spectral information. Therefore, radiometric calibration and atmospheric correction of the images were required in order to ensure that accurate ground reflection information was obtained and, in turn, to ensure the accuracy of water quality monitoring through remote sensing data. After image preprocessing, the water body of Dianshan Lake was extracted, using a mask, for the subsequent band operations. 3. Research Methods 3.1. BP Neural Network Modeling Analysis A huge sample size is generally required for BP neural network learning. Considering the time limitations associated to synchronous water sample collection, the sampling in the two phases was combined. The feasibility analysis was performed as follows: The input parameter of the BP neural network was the water body reflectivity band combination. The water body reflectivity is related to the nature of the water body. As remote sensing data with the same pre-processing (radiometric calibration, atmospheric correction) were used, and as the image data used were all obtained in winter in the same study area, the homogeneity of the obtained water reflectance remote sensing images was ensured; The water sample collection method and water Chl-a concentration measurement method were kept consistent for both times, ensuring the reliability and consistency of the accuracy of measured Chl-a concentration accuracy; The results were derived according to the radiation transfer model formula given in [ ]. The bottom reflectance can be ignored, as light cannot reach the bottom of the lake, due to its depth and transparency. Therefore, the main factors affecting the reflectance of the entire water body were the concentration of Chl-a and suspended solids. In summary, adoption of the BP neural network model, the water reflectivity training samples, and the measured Chl-a concentrations were considered sufficient to establish a BP neural network prediction and retrieval model for chlorophyll concentration. Another influencing factor was the concentration of suspended solids. If measured data are available, then a predictive retrieval model can also be established. 3.2. Principle of BP Neural Network Method A BP neural network is a multi-layer feed-forward neural network trained using the error back propagation algorithm, which includes the forward and error backpropagation processes [ ]. In the forward propagation process, the input is passed from the input layer to the hidden layer, processed layer-by-layer in the hidden layers, and then passed to the output layers. If an error exists between the output result and the expected value, then the error back propagation process is immediately executed. Next, a new round of forward propagation is performed after the backpropagation. The forward propagation and error back propagation processes are repeated until the minimum error between the expected value and the output result meets the requirements. The complete BP neural network structure mainly includes an input layer, several hidden layers, and an output layer, where each layer consists of several nodes (or neurons). The neurons in the same layer are not connected with or affected by each other. The state of each layer of neurons only has an impact on the state of the next layer of neurons, and all of the layers are connected. A three-layer shallow neural network with one hidden layer was used in this research [ ]. The network has been shown to be capable of approximating any nonlinear function and learning and simulating complex nonlinear relationships [ 3.3. Parameter Selection of BP Neural Network Model The correlation between the measured Chl-a concentration data and corresponding single-band remote sensing reflectance was analyzed. Table 4 shows that some single bands were not highly correlated with the chlorophyll concentration. If the single band is directly used as an input to the BP neural network, then the validity of the Chl-a concentration retrieval cannot be guaranteed. Therefore, the correlation between the various bands of Landsat-8 and combinations of bands with the concentration of Chl-a requires further study. A total of 436 combinations of bands were obtained in this research, where the highest correlation between the band combinations and the measured Chl-a concentration was 0.8, indicating a strong correlation, however, not all of the band combinations were highly correlated with the measured values. Therefore, according to the correlation, 66 combinations with a correlation higher than 0.5 or lower than −0.5 were selected, as listed in Table 5 Among the 66 kinds of band combinations, the 3 band combinations with the highest correlation in the same combination type were selected, and the combination types without three items were eliminated. Nine band combination types remained, which are listed in Table 6 Each of the nine combination types with high correlation in Table 6 were taken as an input to the input layer of the BP neural network. Meanwhile, the number of neuron nodes was determined using an empirical formula in order to prevent the model from suffering from overfitting due to excessive nodes or from the disconnection of the input and the output due to insufficient nodes [ ]. The following empirical formula was used [ is the number of hidden layer nodes, is the number of input layer nodes, is the number of output layer nodes, and is in the range 0–10. Through calculation, the value range of the number of hidden layer nodes was 2–12. 3.4. Construction of BP Neural Network Model As the input and output of the proposed BP neural network model were remote sensing reflectance values and Chl-a concentration, respectively, the range of variables should be unified and normalized before the network is established, within the range [−1, 1]. Before outputting the retrieval results, inverse normalization was required to output the real retrieval results. Then, the nine types of band combinations mentioned above were separately used as the input layer of the BP neural network model, with the retrieval Chl-a concentration as the output layer. The number of hidden layer nodes was 2–12, and the network training function adopted was trainlm. The activation function of the hidden layer was the sigmoid hyperbolic tangent function (tansig), and the output layer function used was the linear function (purelin). The maximum training times were set to 2000 times, and the convergence error was 0.00001 [ Through the repeated training and multiple tests of the model, the differences between the mean absolute error ( ) and the coefficient of determination ( -squared) between the retrieval results with varying numbers of nodes in different hidden layers under the input layer of different combinations of the test samples and the corresponding measured Chl-a concentration from sampling were calculated. The best BP neural network model was determined by taking the mean absolute error and -squared as the standard. The flow chart of building the BP neural network model is shown in Figure 2 . Generally, the larger the -squared and the smaller the mean absolute error, the higher the accuracy of the model and the smaller the deviation. $R 2 = ∑ i = 1 n ( A i − C ¯ ) 2 ∑ i = 1 n ( C i − C ¯ ) 2$ $M A E = 1 n ∑ i = 1 n A i − C i$ is the measured value, is the number of samples (denoted by = 1, 2,⋯, $C ¯$ is the average of the measured values, and is the value calculated by the retrieval model. -squared, the root means square error ( ), and the mean relative error ( ) were selected to serve as the standard of model accuracy evaluation. In general, the larger the R-Squared, the smaller the root mean square error, and the closer the mean relative error to 0, the higher the accuracy of the model and the smaller the bias. $R M S E = ∑ i = 1 n A i − C i 2 n$ $M R E = 100 % n ∑ i = 1 n C i − A i C i$ is the measured value, is the number of samples, denoted by = 1, 2…, $C ¯$ is the average of the measured values, and is the value calculated by the retrieval model. 3.5. Construction of Band Combination Model Multiple linear regression and curve estimation analyses were used to construct a band combination retrieval model. Multiple linear regression analysis was used to select the top five band combinations as independent variables—namely, (B1 + B7) − B3/B1, (B1 − B7) − B3/B1, (B3 − B1) − B3/B1, (B1 + B7) + (B1 − B3), (B1 − B3) + (B1 − B7)), and the measured Chl-a concentration was the dependent variable. The multiple regression analysis was carried out using the SPSS software, and a retrieval model was constructed. The results are shown in Table 7 Table 8 Table 9 below, where Table 7 shows the variables entered or removed, Table 8 provides the fitting results for the model, and Table 9 lists the coefficients of various variables in the model equation. In summary, the following model equation was derived: Chl-a = 61.41 × ((B1 − B3) + (B1 − B7)) − 29.21 × ((B3 − B1) − B3/B1) − 71.715 × ((B1 + B7) + (B1 − B3)) − 12.949 where Chl-a represents the retrieval result, B1 represents the remote sensing reflectance of Landsat-8 remote sensing data Band1, B3 represents the remote sensing reflectance of Landsat-8 remote sensing data Band3, and B7 represents the remote sensing reflectance of Landsat-8 remote sensing data Band7. According to the above correlation analysis, the band combination with the highest correlation—namely, (B1 + B7) − B3/B1—was selected to conduct curve estimation analysis through the SPSS software, in order to construct a retrieval model. A total of four function retrieval models were constructed: namely, a linear function model, a quadratic function model, a cubic function model, and an exponential function model. As shown in Table 10 below, the best fitting effect of the four function models was obtained with the cubic function model, and the fitting coefficient was 0.65. 4. Results 4.1. Results of the BP Neural Network Model The results indicated that when the input layer of the BP neural network was set to the (B1 − B3)/(B1 − B7), (B3 − B1)/(B7 − B1), and (B3 − B1)/(B1 − B7) band combination and the number of neurons in the hidden layer was 2, the retrieval result of the BP neural network model presented the highest correlation with the measured value, with an R-Squared value of 0.86, and a mean absolute error of This optimal BP neural network model was used to retrieve the Chl-a concentration for the test sample. The retrieval results and the corresponding measured Chl-a concentration were analyzed for precision evaluation. The results demonstrated that the -Squared value of the retrieval results between the test samples and the measured values was 0.86, the root mean square error was 1.69 μg/L, and the average relative error was 19.48%. Figure 3 clearly shows that the measured values showed the same trend as the retrieval result, indicating the small retrieval error of the model and its good retrieval effect. 4.2. Results of the Band Combination Model Chl-a concentrations were retrieved using the model constructed by multiple regression analysis. The results indicate that the -Squared value of the retrieval results of the test samples and the measured values was 0.8, the root mean square error was 2.08 μg/L, and the average relative error was 23.62%. A comparison between the retrieval results and the measured values are shown in Figure 4 . The retrieval results of the model constructed by multiple regression analysis and the measured values showed the same trend of change, and the error was relatively small. When the chlorophyll concentration was low, the concentration value was overestimated. The four functional models constructed by curve estimation analysis were used to carry out retrieval for comparison. The mean absolute error and R-Squared of the retrieval results and the measured values were used as criteria to determine the best retrieval model. It can be seen from Figure 5 that the coefficient of determination of the cubic function model was the highest, and the mean absolute error was the smallest. Among the four models, the best fitting effect was achieved by the cubic function model (see Table 10 ). Therefore, the cubic function model was determined to be the best retrieval model among the four models constructed by curve estimation analysis and, so, the cubic function model was used for A comparison between the retrieval results of the cubic function model and the measured values is shown in Figure 6 . The retrieval results of the cubic function model and the measured values showed a similar trend, and the error was smaller than that of the model constructed by multiple regression analysis. However, the same situation of overestimating the concentration values was observed: when the chlorophyll concentration was low, the concentration value was overestimated. 4.3. Comparative Analysis of Model Results Accuracy evaluation and an analysis of the retrieval models were carried out, and the retrieval effects of the models were compared; the results are shown in Table 11 below. The retrieval effect of the model constructed using curve estimation analysis was better than that of the model constructed by multiple regression analysis, showing a higher -Squared value. The mean relative error and the root mean square error were also smaller than those of the multiple regression analysis-based model. Therefore, among the models constructed by the two regression methods, the model constructed by curve estimation analysis was more suitable for the Dianshan Lake research area. Among the three retrieval models, the BP neural network model presented the smallest the mean relative error and root mean square error, with mean relative error lower than 20% and -Squared value of 0.86. Therefore, the proposed model has certain feasibility. In general, the retrieval accuracy of the BP neural network model was better than that of the band combination model. However, the correlation between the model retrieval results and the measured values was slightly worse than that of the retrieval model constructed by curve estimation analysis. 4.4. Spatiotemporal Analysis of Chl-a Concentration In this research, preprocessed Landsat-8 remote sensing data were used for band operations. Combined with the optimal BP neural network model selected above, where the input to the BP neural network was (B1 − B3)/(B1 − B7), (B3 − B1)/(B7 − B1), (B3 − B1)/(B1 − B7); the number of nodes in the hidden layer was 2; and the number of nodes in the output layer was 1—Chl-a concentration in the water body of Dianshan Lake for 2020 and 2021 was retrieved. According to the spatial distribution of the retrieval results, as shown in Figure 7 , the concentration of Chl-a in Dianshan Lake in 2021 was nearly double that in 2020: the concentration of Chl-a in 2020 was in the range of 0.84–7.17 μg/L, while that in 2021 is in the range of 5.91–12.31 μg/L. From the perspective of spatial distribution, the Chl-a concentration in Dianshan Lake was unevenly distributed and, so, the concentration of Chl-a in the lake varied greatly. This was mainly because Dianshan Lake receives incoming water from Taihu Lake, with many rivers entering and exiting, such that the water body in the lake has strong fluidity. It is also affected by human production and living, as well as the sewage discharge from aquaculture areas, making the Chl-a concentration near the shore higher than that in the open center of the lake. 5. Discussion In the paper, two types of band combination models and a BP neural network model were constructed. In order to construct two types of band combination models, firstly we combined the bands of remote sensing images by using four operations of mathematics and logarithmic operations. These operations could improve the correlation between remote sensing reflectivity and Chl-a concentration in water, as well as the retrieval effect of Chl-a concentration in water. Then multiple linear regression analysis and curve estimation analysis were used, respectively, as algorithms for constructing two types of band combination models. Comparison to other papers [ ], two types of regression analyses to construct the band combination model. Furthermore, we discovered that the band combination model based on curve estimation analysis had a better retrieval -Squared is 0.87, root mean square error is 1.72 μg/L, and average relative error is 22.45%. When the retrieval effect of the band combination model of literature 11 and literature 15 [ ] was examined, the R-Squared of both was less than 0.87. Compared with the retrieval accuracy of the literature 11 and 15 band combination models, the retrieval accuracy of this paper has been improved. This may be because the modeling method in this paper found the best band combination of correlation and the most appropriate regression analysis method to build the band combination retrieval model. Studies have shown that [ ] the BP neural network model has a better retrieval effect than the retrieval model based on empirical and semi-empirical methods, and its R-Squared higher than 0.8 and root mean square error within a reasonable range. The results of this research also revealed that the BP neural network model retrieval impact was better than the band combination model, with an R-Squared of 0.86 and a root mean square error of 1.69 μg/L mean relative error of 19.48%. This demonstrates that the BP neural network is adaptable in retrieving Chl-a concentrations in aquatic bodies. The BP neural network model was compared to various neural network models, such as the convolutional neural network model and the hybrid neural network model [ ]. The BP neural network model was relatively easy to construct, had a rapid processing rate, and can mimic nonlinear connections in complicated water bodies [ ]. Although the convolutional neural networks model had the characteristics of minimizing noise sensitive and learning high abstraction and was suited for recovering Chl-a concentration in the Pearl River estuary, which includes turbid water bodies [ ], its model construction is difficult. Furthermore, Dianshan Lake is an inland lake, and the water body’s features differ from those of the Pearl River estuary, especially pollutant and suspended matter concentrations. The research has also shown simplicity and better accuracy, using the BP neural network model to retrieval Chl-a concentration of Dianshan Lake water body in the paper. However, when compared to other studies on the retrieval of Chl-a concentration in water bodies [ ], the number of samplings in our study was low. Therefore, the number of samplings should be increased in order to achieve long series quantitative retrieval of Chl-a concentration in Dianshan Lake water bodies. As the same sensor data were used for the two phases and the remote sensing image data underwent the same preprocessing (i.e., radiometric calibration, atmospheric correction, and water and land separation), the nature and quality of data used as the input to neural network were ensured to be uniform. The experimental results presented in this paper indicate that the joint retrieval of data considering two phases is feasible. The different altitude and azimuth angles of the satellite and the sun when the two phases of remote sensing data are imaged may influence the retrieval results; this effect will be investigated in the future. 6. Conclusions Based on two phases of Landsat-8 satellite images and measured the Chl-a concentration data, we constructed two types of band combination model and a BP neural network model to retrieve the Chl-a concentration in Dianshan Lake. The band combination models were constructed by using multiple linear regression analysis and curve estimation approaches in order to identify the modeling approach better adapted to the study area and obtain optimal retrieval results. The results demonstrated that the accuracy of the best curve estimation analysis-based model was higher than that of the multiple regression analysis-based model, making the former more applicable for the retrieval of the Chl-a concentration in Dianshan Lake. Comparative analysis of the retrieval results from the combination models and the BP neural network model indicate that the BP neural network model has certain advantages; namely, the BP neural network model obtained the highest retrieval accuracy and the best retrieval effect, and the BP neural network model successfully retrieved the Chl-a concentration in Dianshan Lake. Therefore, the proposed model can provide research guidance for the subsequent retrieval of Chl-a concentration in Dianshan Lake and has certain reference for the retrieval of Chl-a concentration in other third-class water bodies. Thus, it has certain practical significance. Moreover, joint retrieval in two phases can allow us to overcome the shortcomings associated with a lack of measured data and provides new ideas for water quality monitoring over a large area. Author Contributions Conceptualization, C.-Y.Q.; methodology, C.-Y.Q.; validation, W.-D.Z. and N.-Y.H.; formal analysis, W.-D.Z. and Y.-W.L.; resources, Y.-X.K.; data curation, Z.-Y.Z.; writing—original draft preparation, C.-Y.Q.; writing—review and editing, W.-D.Z.; visualization, C.-Y.Q.; supervision, Y.-W.L.; project administration, N.-Y.H.; funding acquisition, W.-D.Z. and N.-Y.H. All authors have read and agreed to the published version of the manuscript. This work was supported by the National Key R&D Program of China (2016YFC1400904) and the scientific innovation program project by the Shanghai Committee of Science and Technology (Grant No. Institutional Review Board Statement Not applicable. Informed Consent Statement Informed consent was obtained from all subjects involved in the study. Data Availability Statement Data Availability Statement: Data are available from the corresponding author upon request and subject to the Human Subjects protocol restrictions. Thank you to United States Geological Survey. Conflicts of Interest The authors declare no conflict of interest. 1. Song, T.; Zhou, W.; Liu, J.; Gong, S.; Shi, J.; Wu, W. Evaluation on distribution of chlorophyll-a content in surface water of Taihu Lake by hyperspectral inversion models. Acta Sci. Circumstantiae 2017, 37, 888–899. [Google Scholar] 2. Cheng, C.; Li, Y.; Ding, Y.; Tu, Q.; Qin, P. Remote Sensing Estimation of Chlorophyll-a and Total Suspended Matter Concentration in Qiantang River Based on GF-1/WFV Data. J. Yangtze River Sci. Res. Inst. 2019, 36, 21–28. [Google Scholar] 3. Honeywill, C.; Paterson, D.M.; Hegerthey, S.E. Determination of microphytobenthic biomass using pulse-amplitude modulated minimum fluorescence. Eur. J. Phycol. 2002, 37, 485–492. [Google Scholar] 4. Zhu, G.; Xu, H.; Zhu, M.; Zou, W.; Guo, C.; Ji, P.; Da, W.; Zhou, Y.; Zhang, Y.; Qin, B. Changing characteristics and driving factors of trophic state of lakes in the middle and lower reaches of Yangtze River in the past 30 year. J. Lake Sci. 2019, 31, 1510–1524. [Google Scholar] 5. Kang, L. Study on Eutrophication Process and Water Ecological Effect of Dianshan Lake. Environ. Sci. Manag. 2020, 45, 171–174. [Google Scholar] 6. Behmel, S.; Damour, M.; Ludwig, R.; Rodriguez, M.J. Water quality monitoring strategies—A review and future perspectives. Sci. Total Environ. 2016, 517, 1312–1329. [Google Scholar] [CrossRef] [ 7. Le, C.; Li, Y.; Sun, D.; Wang, H.; Huang, C. Spatio-temporal Distribution of Chl-a Concentration, and Its Estimation in Taihu Lake. Environ. Sci. 2008, 29, 619–626. [Google Scholar] 8. Tian, Y.; Guo, Z.; Qiao, Y.; Lei, X.; Xie, F. Remote sensing of water quality monitoring in Guanting Reservoir. Acta Ecol. Sin. 2015, 35, 2217–2226. [Google Scholar] 9. Li, J.; Pei, Y.; Zhao, S.; Xiao, R.; Sang, X.; Zhang, C. A review of remote sensing for environmental monitoring in China. Remote Sens. 2020, 12, 1130. [Google Scholar] [CrossRef] [Green Version] 10. Zhang, Y.; Zhang, Y.; Cha, Y.; Shi, K.; Zhou, Y.; Wang, M. Remote sensing estimation of total suspended matter concentration in Xin’anjiang reservoir using Landsat 8 data. Environ. Sci. 2015, 36, 56–63. [Google Scholar] 11. Dan, Y.; Zhou, Z.; Li, S.; Zhang, H.; Jiang, Y. Inversion of Chlorophyll-a Concentration in Pingzhai Reservoir Based on Sentinel-2. Environ. Eng. 2020, 38, 180–185. [Google Scholar] 12. Huang, Y.; Jiang, D.; Zhuang, D.; Fu, J. Research on remote sensing estimation of chlorophyll concentration in water body of Tangxun Lake. J. Nat. Disasters 2012, 21, 215–222. [Google Scholar] 13. Mu, B.; Cui, T.; Cao, W.; Qin, P.; Zheng, R.; Zhang, J. A Semi-analytical monitoring method during the process of red tide based on optical buoy. Acta Opt. Sin. 2012, 32, 8–16. [Google Scholar] 14. Li, Y.; Huang, J.; Wei, Y.; Lu, W. Inversing Chlorophyll concentration of Taihu Lake by ana lytic model. J. Remote Sens. 2006, 10, 169–175. [Google Scholar] 15. Yang, X.; Jiang, Y.; Deng, X.; Zheng, Y.; Yue, Z. Temporal and spatial variations of Chlorophyll a concentration and eutrophication assessment (1987–2018) of Donghu Lake in Wuhan using Landsat images. Water 2020, 12, 2192. [Google Scholar] [CrossRef] 16. Zhang, L.; Dai, X.; Bao, Y.; Wu, J.; Yu, C. Inversion of chlorophyll-a concentration based on TM images in Wuliangsuhai Lake. Environ. Eng. 2015, 33, 133–138. [Google Scholar] 17. Liu, W.; Deng, R.; Liang, Y.; Wu, Y.; Liu, Y. Retrieval of chlorophyll-a concentration in Chaohu based on radiative transfer model. Remote Sens. Land Resour. 2019, 31, 102–110. [Google Scholar] 18. Xie, T.; Chen, Y.; Lu, W. Retrieval of chlorophyll-a in the lower reaches of the Minjiang River via three-band bio-optical model. Lasers Optoelectron. Prog. 2020, 57, 1–8. [Google Scholar] 19. Pan, C.; Xia, L.; Wu, Z.; Wang, M.; Xie, X.; Wang, F. Remote sensing retrieval of chlorophyll-a concentration in coastal aquaculture areaof Zhelin Bay. J. Trop. Oceanogr. 2020, 40, 142–153. [ Google Scholar] 20. Wu, Y.; Deng, R.; Qin, Y.; Liang, Y.; Xiong, L. The study of characteristics for Chlorophyll Concentration derived remote sensing in Xinfengjiang Reservoir. Remote Sens. Technol. Appl. 2017, 32, 825–834. [Google Scholar] 21. Zhang, Y.; Pulliainen, J.T.; Koponen, S.S.; Martti, T.H. Water quality retrievals from combined Landsat TM data and ERS-2 SAR data in the Gulf of Finland. IEEE Trans. Geosci. Remote Sens. 2003, 41, 622–629. [Google Scholar] [CrossRef] 22. Ye, H.; Tang, S.; Yang, C. Deep learning for Chlorophyll-a concentration retrieval: A case study for the Pearl River Estuary. Remote Sens. 2021, 13, 3717. [Google Scholar] [CrossRef] 23. Xue, L.; Jian, S.; Zhong, L. Chlorophyll-A Prediction of Lakes with Different Water Quality Patterns in China Based on Hybrid Neural Networks. Water 2017, 9, 524. [Google Scholar] [CrossRef] [ Green Version] 24. Zhang, X.; Zheng, X. Discussion on retrieval method of surface Chlorophyll concentration in Bohai Bay based on BP neural network. J. Ocean Technol. 2018, 37, 79–87. [Google Scholar] 25. Zhu, Y.; Zhu, L.; Li, J.; Chen, Y.; Zhang, Y.; Hou, H.; Ju, X.; Zhang, Y. The study of inversion of Chl-a in Taihu based on GF-1 WFV image and BP neural network. Acta Sci. Circumstantiae 2017, 37 , 130–137. [Google Scholar] 26. Zhang, Y.; Zhang, D.; Sun, Z. Water quality and water environmental assessment of Dianshan Lake in Shangha. J. Water Resour. Water Eng. 2017, 28, 90–96. [Google Scholar] 27. Wang, S.; Qian, X.; Zhao, G.; Zhang, W.; Zhao, Y.; Fan, Z. Contribution analysis of Pollution Sources Around Dianshan Lake. Resour. Environ. Yangtze Basin 2013, 22, 331–336. [Google Scholar] 28. Li, Z.; Lu, J.; Wang, G.; Ge, X. Comparison of measurement of phytoplankton Chlorophyll-a concentration by spectrophotometry. Environ. Monit. China 2006, 22, 21–23. [Google Scholar] 29. Ma, H.; Liu, S. The Potential Evaluation of Multisource Remote Sensing Data for Extracting Soil Moisture Based on the Method of BP Neural Network. Can. J. Remote Sens. 2016, 42, 117–124. [Google Scholar] [CrossRef] 30. Wang, S.J.; Guan, D.S. Remote Sensing Method of Forest Biomass Estimation by Artificial Neural Network Models. Ecol. Environ. 2007, 16, 108–111. [Google Scholar] 31. Hinton, G.E. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [PubMed] [Green Version] 32. Fan, Y.; Liu, J. Water depth remote sensing retrieval model based on Artificial Neural Network Techniques. Hydrogr. Surv. Charting 2015, 165, 20–23. [Google Scholar] 33. Xu, P.; Cheng, Q.; Jin, P. Inversion of Chlorophyll-a of clean water in Qiandao Lake With remote sensing data using the neural network. Resour. Environ. Yangtze Basin 2021, 30, 1670–1679. [Google 34. Zhou, L.; Gu, X.; Zeng, Q.; Zhou, W.; Mao, Z.; Sun, M. Applications of Back Propagation Neural Network for Short-term Prediction of Chlorophyll-a Concentration in Different Regions of Lake Taihu. J. Hydroecol. 2012, 33, 1–6. [Google Scholar] Figure 1. Geographical location of the study area and distribution of sampling points: (a) Study area location; and (b) sampling point locations. Figure 5. Four models constructed by curve estimation analysis: (a) Linear function model (b) quadratic function model (c) cubic function model; and (d) exponential function model. Figure 7. Retrieval map of Chl-a in Dianshan Lake: (a) Chl-a retrieval concentration distribution map obtained for 22 December 2020; and (b) Chl-a retrieval concentration distribution map obtained for 14 November 2021. Sampling Point ID Longitude (°) Latitude (°) Chl-a (μg/L) ID1 120.9733 31.09405 4.11 ID5 120.9301 31.1049 2.00 ID9 120.9104 31.08038 1.42 ID16 120.9863 31.11427 1.47 ID21 120.947 31.13484 2.27 ID25 120.9523 31.10922 5.47 ID34 120.9283 31.07528 8.63 ID39 120.9294 31.09667 10.54 ID40 120.9364 31.09333 10.95 ID46 120.9297 31.10833 12.65 ID47 120.9472 31.09389 11.70 ID56 120.9778 31.09667 13.08 ID58 120.9636 31.10917 9.80 ID75 120.9669 31.1325 10.95 ID76 120.9761 31.125 13.07 ID77 120.9831 31.12028 10.95 Date Maximum Value (μg/L) Minimum Value (μg/L) Average Value (μg/L) 21 December 2020 6.84 0.95 3.15 14 November 2021 16.56 7.66 10.91 All 16.56 0.95 8.29 Band Name Band Range (μm) Spatial Resolution (m) Band1 Coastal 0.433–0.453 30 Band2 Blue 0.450–0.515 30 Band3 Green 0.525–0.600 30 Band4 Red 0.630–0.680 30 Band5 NIR 0.845–0.885 30 Band6 SWIR1 1.560–1.660 30 Band7 SWIR2 2.100–2.300 30 Band8 PAN 0.500–0.680 15 Band9 Cirrus 1.360–1.390 30 Band Correlation Band1 −0.59 Band2 −0.36 Band3 −0.12 Band4 −0.10 Band5 −0.01 Band6 −0.16 Band7 −0.46 Combination Methods Correlation Combination Correlation Combination Methods Correlation Coefficient Methods Coefficient Coefficient B1 + B7 −0.60 IN(B3/B2) 0.56 B3/B1/(B1 + B7) 0.79 B1 − B2 −0.71 IN(B3/B1) 0.69 B3/B1/(B1 − B7) 0.78 B1 − B3 −0.53 IN(B4/B1) 0.54 (B1 − B3)/(B7 − B1) 0.71 B1 − B7 −0.58 IN(B1)/IN(B7) 0.56 (B1 − B3)/(B1 − B7) −0.71 B2 − B1 0.71 IN(B2)/IN(B1) −0.51 (B1 − B3)/(B1 + B7) −0.70 B3 − B1 0.53 IN(B1)/(B1 + B7) −0.57 (B2 − B1)/(B7 − B1) −0.66 B7 − B1 0.58 IN(B1)/(B1 − B7) −0.56 (B2 − B1)/(B1 − B7) 0.66 B1 × B7 −0.63 IN(B1)/(B7 − B1) 0.56 (B2 − B1)/(B1 + B7) 0.67 B1/B4 −0.53 IN(B3/B1/(B1 + B7)) 0.79 (B3 − B1)/(B7 − B1) −0.71 B1/B3 −0.66 IN(B3/B1/(B1 − B7)) 0.77 (B3 − B1)/(B1 − B7) 0.71 B1/B2 −0.66 B3/B1/(B7 − B1) −0.78 (B3 − B1)/(B1 + B7) 0.70 B2/B3 −0.50 (B1 + B7)/B3/B1 −0.78 (B1 − B2)/(B7 − B1) 0.66 B2/B1 0.67 (B1 − B2)/B3/B1 −0.71 (B1 − B2)/(B1 − B7) −0.66 B3/B2 0.60 (B1 − B7)/B3/B1 −0.76 (B1 − B2)/(B1 + B7) −0.67 B3/B1 0.71 (B1 − B7)/B3/B1 0.71 (B1 + B7) + (B1 − B2) −0.73 IN(B1) −0.59 (B7 − B1)/B3/B1 0.76 (B1 + B7) + (B1 − B3) −0.80 IN(B1−B7) −0.57 (B1 + B7) − B3/B1 −0.80 (B1 + B7) + (B1 − B7) −0.59 IN(B1/B4) −0.54 (B1 − B2) − B3/B1 −0.72 (B1 − B2) + (B1 − B3) −0.61 IN(B1/B3) −0.69 (B1 − B3) − B3/B1 −0.65 (B1 − B3) + (B1 − B7) −0.79 IN(B1/B2) −0.66 (B1 − B7) − B3/B1 −0.80 (B2 − B1) + (B3 − B1) 0.61 IN(B2/B3) −0.56 (B2 − B1) − B3/B1 −0.66 (B2 − B1) + (B7 − B1) 0.72 IN(B2/B1) 0.66 (B3 − B1) − B3/B1 −0.80 (B3 − B1) + (B7 − B1) 0.79 “IN” stands for exponential operation. Combination Methods Combination Methods Combination Methods B1/B3 B2/B1 B3/B1 B1 − B2 B1 − B7 B2 − B1 IN(B1/B2) IN(B1/B3) IN(B3/B1) IN(B1)/(B1 + B7) IN(B1)/(B1 − B7) IN(B1)/(B7 − B1) (B1 + B7)/B3/B1 (B1 − B7)/B3/B1 (B7 − B1)/B3/B1 B3/B1/(B1+B7) B3/B1/(B1 − B7) B3/B1/(B7 − B1) (B1 + B7) − B3/B1 (B1 − B7) − B3/B1 (B3 − B1) − B3/B1 (B1 − B3)/(B1 − B7) (B3 − B1)/(B7 − B1) (B3 − B1)/(B1 − B7) (B1 + B7) + (B1 − B3) (B1 − B3) + (B1 − B7) (B3 − B1) + (B7 − B1) Model Input Variable Removed Variable Method 1 (B1 − B3) + (B1 − B7), (B3 − B1) − B3/B1, (B1 + B7) + (B1 − B3) ^b (B1 + B7) − B3/B1, (B1 − B7) − B3/B1 Enter ^b “Tolerance = 0.000” limit reached. Model R R-Squared Adjusted R-Squared Error in Standard Estimation 1 0.782 ^a 0.611 0.591 2.629596385113078 ^a Predictors: (constant), (B1 − B3) + (B1 − B7), (B3 − B1) − B3/B1, (B1 + B7) + (B1 − B3). Unstandardized Standardized Coefficient Model Coefficients t Salience B Standard Error Beta Constant −12.949 44.674 −0.290 0.773 1 (B3 − B1) − B3/B1 −29.212 33.787 −0.397 −0.865 0.391 (B1 + B7) + (B1 − B3) −71.715 46.722 −2.218 −1.535 0.130 (B1 − B3) + (B1 − B7) 61.410 46.942 1.840 1.308 0.196 Independent Variable Model Fitting Equation R−Squared Linear function Chl-a = − 17.445x − 1.9969 0.60 (B1 + B7) − B3/B1 Quadratic function Chl-a = 12.581x^2 − 3.9426x + 1.1729 0.61 Cubic function Chl-a = 141.29x^3 + 232.44x^2 + 101.5x + 16.359 0.65 Exponential function Chl-a = 1.0733e^−3.121x 0.58 In the fitting equation, Chl-a represents the retrieval result, x is the independent variable (B1 + B7) − B3/B1,B1, B3 and B7 represent the remote sensing reflectivity of Landsat−8 remote sensing data Band1, Band3 and Band7. Band Combination Band Combination Index Model—Multiple Model—Curve Estimation Analysis BP Neural Network Model Regression Analysis R-Squared 0.80 0.87 0.86 RMSE (μg/L) 2.08 1.72 1.69 MRE (%) 23.62 22.45 19.48 Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https: Share and Cite MDPI and ACS Style Zhu, W.-D.; Qian, C.-Y.; He, N.-Y.; Kong, Y.-X.; Zou, Z.-Y.; Li, Y.-W. Research on Chlorophyll-a Concentration Retrieval Based on BP Neural Network Model—Case Study of Dianshan Lake, China. Sustainability 2022, 14, 8894. https://doi.org/10.3390/su14148894 AMA Style Zhu W-D, Qian C-Y, He N-Y, Kong Y-X, Zou Z-Y, Li Y-W. Research on Chlorophyll-a Concentration Retrieval Based on BP Neural Network Model—Case Study of Dianshan Lake, China. Sustainability. 2022; 14 (14):8894. https://doi.org/10.3390/su14148894 Chicago/Turabian Style Zhu, Wei-Dong, Chu-Yi Qian, Nai-Ying He, Yu-Xiang Kong, Zi-Ya Zou, and Yu-Wei Li. 2022. "Research on Chlorophyll-a Concentration Retrieval Based on BP Neural Network Model—Case Study of Dianshan Lake, China" Sustainability 14, no. 14: 8894. https://doi.org/10.3390/su14148894 Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details Article Metrics
{"url":"https://www.mdpi.com/2071-1050/14/14/8894","timestamp":"2024-11-05T00:45:33Z","content_type":"text/html","content_length":"474903","record_id":"<urn:uuid:104a81b3-3ad3-4edb-aea2-e568bfcbeb6c>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00312.warc.gz"}
The objectives of the study are to integrate the conditional Latin The objectives of the study are to integrate the conditional Latin Hypercube Sampling (cLHS), sequential Gaussian simulation (SGS) and spatial analysis in remotely sensed images, to monitor the effects of large chronological disturbances on spatial characteristics of landscape changes including spatial heterogeneity and variability. of multiple NDVI images present a very BMS-794833 robust behavior, which advocates the use of the index for the quantification of the landscape spatial patterns and land cover change. In addition, the results transferred by Open Geospatial techniques can be accessed from end-user and web-based applications of the watershed management. is the lag distance that separates pairs of points; + + << is the number of that falls between quantiles and is the proportion of class j in Z. To ensure that the correlation of the sampled variables shall replicate the original data, another objective function is added: is the change in the objective function, and T is a cooling temperature (between 0 and 1), which is decreased by a factor d during each iteration. Generate a uniform random number between 0 and 1. If < < and replace it with a random site(s) from unsampled sites r. End when the value of P is between 0 and 1, indicating that the probability of the search is a random search or systematically replacing the samples that have the worst fit with the strata. Go to step 3. Repeat steps 3C7 until the objective function value falls beyond a given stop criterion or a specified number of iterations. 2.6. Sequential Gaussian Simulation The SGS assumes a Gaussian random field, such that the mean value and covariance characterize the conditional cumulative density function [56] completely. During the PRKACA SGS process, Gaussian transformation of available measurements is simulated, such that each simulated value is conditional on original data and all previously simulated values [21,57]. A value simulated at a one location is randomly selected from the normal distribution function defined by the kriging mean and variance based on neighborhood values. Finally, simulated normal values are back-transformed into simulated values to yield the original variable. The simulated value at the new randomly visited point value depends on both original data and previously simulated values. This process is repeated until all true points have been simulated. In sequential simulation algorithm, modeling of the N-point cumulative density function (ccdf) is a sequence of N univariate BMS-794833 ccdfs at each node (grid cell) along a random path [58]. The sequential simulation algorithm has the following steps [58]: Establish a random path that is visited once and only once, all nodes = 1, , N discretizing the domain of interest Doman. A random visiting sequence ensures that no spatial continuity artifact is introduced into the simulation by a specific path visiting N nodes. At the first visited N nodes (= 1,, ({((+ 1, to be used for all subsequent local ccdf determinations. At the ith node along the random path: Model the local ccdf of ? {1 near previously simulated values 1 near simulated values = 1 previously,, ? 1: + i. Repeat step 3 until all N nodes along the random path are visited. 2.7. Morans I Spatial autocorrelation BMS-794833 is a useful tool for describing the dependency of spatial patterns. First, spatial structures are described by so-called structure functions [25,59].Morans I, which ranges between ?1 and +1, is a well known spatial autocorrelation method [60]. The index, I, is calculated as follows: (7) where yh and yi denote the values of the observed variable at sites h and I, respectively; and whi denotes the weight of the variable. The weights, wij, are written in an (n BMS-794833 n) weight matrix W, which is the sum of the weights whi for a given distance class [61]. Morans I is positive and high when a value is similar to adjacent values, and low when a value is dissimilar to adjacent values. In this paper, the global Morans I value for the NDVI was calculated to compare the spatial relations of the NDVI among various events. As a total result, the phenomenon of spatial autocorrelation of NDVI could be tested. 3.?Discussion and Results 3.1. Statistics and Spatial Analysis of NDVI Images The NDVI is one of the most popular methods for monitoring vegetation conditions. It has been reported that multitemporal NDVI is useful for classifying land cover and the dynamics of vegetation [19,62,63]. However, the earthquakes and typhoons is a major natural disturbance to land cover change in Taiwan. For example, the Chi-Chi earthquake led to landslides, dammed lakes and a high death toll. Like the typhoons, subsequent rainstorms cause divergent destruction of vegetation;.
{"url":"https://www.buyresearchchemicalss.net/2017/08/the-objectives-of-the-study-are-to-integrate-the-conditional-latin/","timestamp":"2024-11-11T11:37:30Z","content_type":"text/html","content_length":"59164","record_id":"<urn:uuid:9282f9e3-12a7-4790-acfb-549c737d80f1>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00396.warc.gz"}
How to calculate a median? The median is a measure of central tendency of a distribution and represents the midpoint of an ordered dataset. Unlike the average, which considers every value in the dataset, the median is influenced by the position of the data and not their actual values. This makes the median a more robust measure than the average, as it is less sensitive to outliers. Steps to calculate the median To calculate the median, we follow the steps: 1. Order the dataset: Arrange all values in the dataset from low to high. If your series of numbers is {8, 3, 5, 4, 9, 1}, you order the values as {1, 3, 4, 5, 8, 9}. 2. Find the middle of the dataset: Check if the number of data (n) is even or odd. □ For an odd number of data: The median is the value in the middle of the ordered sequence. In our series of six numbers {1, 3, 4, 5, 8, 9}, there is no single number exactly in the middle, as we have an even number of numbers. □ For an even number of data: The median is the average of the two middle numbers. In our dataset, 4 and 5 are the middle numbers. We calculate their average as (4 + 5)/2 = 4.5. So, the median is 4.5. Suppose we have the following series of numbers: 4, 8, 6, 5, 3, 2, 8, 9, 2, 5. 1. We first arrange the numbers in ascending order: 2, 2, 3, 4, 5, 5, 6, 8, 8, 9. 2. We see that the number of numbers in the series is 10, which is an even number. 3. Since we have an even number of numbers, we take the two middle numbers (5 and 5 in this case) and calculate the average. In this case, the median is also 5. The median gives us a valuable measure of central tendency that gives a more complete view of our data, especially when combined with other measures such as the average and mode. Moreover, as the median is less sensitive to extreme values, it can in many cases provide a more accurate picture of the 'typical' value in a dataset.
{"url":"https://www.clcl8r.com/en/calculate/statistics/median-calculator.php","timestamp":"2024-11-14T04:05:29Z","content_type":"text/html","content_length":"26289","record_id":"<urn:uuid:246fed4a-e6e4-4c89-a1ac-0d8250109754>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00517.warc.gz"}
Confinement of Special Reinforced Concrete Moment Frame Columns Requirements of ACI 318-14 The American Concrete Institute (ACI) published the Building Code Requirements for Structural Concrete (ACI 318-14) and Commentary (ACI 318R-14) in the Fall of 2014. ACI 318-14 has been adopted by reference into the 2015 International Building Code (IBC). There are very significant organizational as well as technical changes between ACI 318-11 and ACI 318-14. A two-part article on the changes was published in the April and May 2016 issues of STRUCTURE magazine. A follow-up article on one of the most significant technical changes – the seismic design provisions for special (meaning specially detailed) shear walls – was published in the July 2016 issue. This is the last follow-up article on another critical change in the requirements for the confinement of columns in special moment frames of reinforced concrete. Introduction to the Changes The ability of the concrete core of a reinforced concrete column to sustain compressive strains tends to increase with confinement pressure. Compressive strains caused by lateral deformation are additive to the strains caused by the axial load. It follows that confinement reinforcement should be increased with the axial load to ensure consistent lateral deformation capacity. The dependence of the amount of required confinement on the magnitude of the axial load imposed on a column has been recognized by some codes from other countries (such as Canada’s CSA A23.3-14 and New Zealand’s NZS 3101-06) but was not reflected in ACI 318 through its 2011 edition. The ability of confining steel to maintain core concrete integrity and increase deformation capacity is also related to the layout of the transverse and longitudinal reinforcement. Longitudinal reinforcement that is well distributed and laterally supported around the perimeter of a column core provides more effective confinement than a cage with larger, widely spaced longitudinal bars. Confinement effectiveness is a key parameter in determining the behavior of confined concrete (Mander et al. 1988) and has been incorporated in the CSA A23.3-14 equation for column confinement. ACI 318, through its 2011 edition, did not explicitly account for confinement effectiveness in determining the required amount of confinement. It instead assumed the same confinement effectiveness independent of how the reinforcement is distributed. Figure 1. Confinement of rectangular column of special moment frame. Given the above, confinement requirements for columns of special moment frames (Section 18.7.5, Figure 1), with high axial load (P[u] > 0.3A[g] f’[c]) or high concrete compressive strength (f’[c] > 10,000 psi) are significantly different in ACI 318-14. The following excerpt from Sheikh et al. explains why high-strength concrete columns are grouped with highly axially loaded columns: “For the same amount of tie steel, the flexural ductility of HSC [High Strength Concrete] columns was significantly less than that of comparable NSC [Normal Strength Concrete] specimens tested under similar P/f’[c] A[g] values. For the same percentage of the confining steel required by the ACI Building Code, NSC columns displayed better ductility than comparable HSC columns tested under similar P/f’[c] A[g]. However, for the same level of axial load measured as a fraction of P[o] (the ultimate axial load capacity), HSC and NSC columns behaved similarly in terms of energy-absorption characteristics when the amount of tie steel in the columns was in proportion to the unconfined concrete strength. Conversely, the amount of confining steel required for a certain column performance appears to be proportional to the concrete strength as long as the applied axial load is measured in terms of P[o] rather than P/f’[c] A[g].” The discussion below is about confinement over the length l[o], the region of potential plastic hinging. One important new requirement is as follows: 18.7.5.2 – Transverse reinforcement shall be in accordance with (a) through Where P[u] > 0.3A[g] f’[c] or f’[c] > 10,000 psi in columns with rectilinear hoops, every longitudinal bar or bundle of bars around the perimeter of the column core shall have lateral support provided by the corner of a hoop or by a seismic hook, and the value of h[x] shall not exceed 8 in. (Figure 2). P[u] shall be the largest value in compression consistent with factored load combinations including E. Figure 2. Confinement of high-strength or highly-axially-loaded rectangular column of special moment frame. The change from prior practice is that instead of every other longitudinal bar having to be supported by a corner of a tie or a crosstie, every longitudinal bar will have to be supported when either the axial load on a column is high, or the compressive strength of the column concrete is high. Also, the hooks at both ends of a crosstie need to be 135-deg. As importantly or perhaps more importantly, the center-to-center spacing between laterally supported bars is restricted to a short 8 inches. In the absence of high-strength concrete or high axial loading, the maximum spacing goes up to 14 inches. In ACI 318-11 and prior editions, the 14-inch limitation used to apply to the center-to-center spacing between legs of hoops and crossties. The other new requirement is in the following section: 18.7.5.4 – Amount of transverse reinforcement shall be in accordance with Table 18.7.5.4 (reproduced here as Table 1). Table 1. (ACI 318-14 Table 18.7.5.4). Confinement of high-strength or highly-axially-loaded rectangular column of special moment frame. The concrete strength factor, k[f] , and confinement effectiveness factor, k[n], are calculated by (a) and (b). k[f] = [pmath]{f prime_c}/{25,000}[/pmath] + 0.6 ≥ 1.0 (18.7.5.4a) k[n] = [pmath]{n_l}/{n_l – 2}[/pmath] (18.7.5.4b) Where n[l] is the number of longitudinal bars or bar bundles around the perimeter of a column core with rectilinear hoops that are laterally supported by the corner of hoops or by seismic hooks. See Tables 2 and 3 for values of k[f] and k[n], respectively, calculated by the above formulas. Table 2. Values of concrete strength factor, kf . Table 3. Values of confinement effectiveness factor, kn. Impact of Changed Confinement Requirements As is seen above, for columns that are made of concrete with specified compressive strength, f’[c], exceeding 10,000 psi and/or are subject to factored axial force, P[u], exceeding 0.3A[g] f’[c] (A [g] = gross cross-sectional area), the required confinement over regions of potential plastic hinging (typically at the two ends) is now a function of the axial force. The impact of the changed requirements is assessed in Table 4. Table 4. Impact of the changed confinement requirements of ACI 318-14 for the regions of potential plastic hinging of special moment frame columns. Bars larger than No. 6 in size are not very practical for use as transverse reinforcement. Also, the ensemble of one hoop and crossties in two orthogonal directions has a thickness of 2¼ inches for No. 6 bar size, which translates into a 1¾-inch clear spacing for a 4-inch center-to-center spacing. Thus, Table 4 shows the limitations on sustainable axial load as the specified compressive strength goes beyond 6 ksi. The limitations have become significantly more severe under ACI 318-14. It should be noted that ACI 318 does not allow Pu to exceed 0.8 (accidental eccentricity factor) x 0.65 (φ for columns with discrete transverse reinforcement) x P[o] = 0.525 P[o], where P[o] = A[g] f’[c] + A[st] (f[y] – f’[c]) So, 0.5 f’[c] A[g] is an extremely high axial load level, which is unlikely to be encountered in special moment frame columns. Also, if one needs to go beyond the range of factored axial loads and concrete strengths that can be accommodated with No. 6 transverse reinforcement at a reasonable spacing, the most effective solution is to switch to transverse reinforcement with yield strength, f [yt], higher than 60 ksi. ACI 318 allows f[yt] to be up to 100 ksi. This article discusses the modified ACI 318-14 confinement requirements for columns of special moment frames. It is shown that the modified requirements have a significant impact on columns that are highly axially loaded (P[u] > 0.3A[g] f’[c]) or made of high-strength concrete (f’[c] > 10,000 psi) or both.▪ Grateful acknowledgments are due to Pro Dasgupta and Ali Hajihashemi of S. K. Ghosh Associates Inc. for their considerable help with the paper. A[ch] = cross-sectional area of a member measured to the outside edges of transverse reinforcement A[sh] = total cross-sectional area of transverse reinforcement, including crossties, within spacing s and perpendicular to dimension b[c] A[st] = total area of non-prestressed longitudinal reinforcement b[c] = cross-sectional dimension of member core measured to the outside edges of the transverse reinforcement composing area A[sh] f’[c] = specified compressive strength of concrete f[y] = specified yield strength for non-prestressed reinforcement f[yt] = specified yield strength of transverse reinforcement h[1] = plan dimension of column in one of two orthogonal directions h[2] = plan dimension of column in other orthogonal direction h[x] = maximum center-to-center spacing of longitudinal bars laterally supported by corners of crossties or hoop legs around the perimeter of the column k[f] = concrete strength factor k[n] = confinement effectiveness factor ℓ[o] = length, measured from joint face along axis of member, over which special transverse reinforcement must be provided n[l] = number of longitudinal bars around the perimeter of a column core with rectilinear hoops that are laterally supported by the corner of hoops or by seismic hooks P[o] = nominal axial strength at zero eccentricity P[u] = factored axial force s = center-to-center spacing of transverse reinforcement x[i] = dimension from centerline to centerline of legs of hoops or crossties (ACI 318-14), of laterally supported longitudinal bars (ACI 318-14) Portions of this article were originally published in the PCI Journal (March/April 2016), and this extended version is reprinted with permission. ACI (American Concrete Institute) Committee 318, Building Code Requirements for Structural Concrete (ACI 318-14) and Commentary (ACI 318R-14), Farmington Hills, MI, 2014. ICC (International Code Council), International Building Code. Washington, DC, 2015. CSA (Canadian Standards Association), Design of Concrete Structures (CSA A23.3-14), Etobicoke, Ontario, Canada, 2014. Standards New Zealand, Concrete Structures Standard (NZS 3101: Part 1: 2006) and Concrete Structures Standard – Commentary (NZS 3101: Part 2: 2006), Wellington, New Zealand, 2006. Mander, J. B., Priestley, M. J. N., and R. Park, R., “Theoretical Stress-Strain Model for Confined Concrete.” Journal of Structural Engineering, Vol. 114, No. 8, 1988, pp. 1804–1825. Sheikh, S. A., Shah, D.V., and Khoury, S. S., “Confinement of High-Strength Concrete Columns,” ACI Structural Journal, V. 91, No. 1, January – February 1994, pp. 100-111.
{"url":"https://www.structuremag.org/fr/article/confinement-of-special-reinforced-concrete-moment-frame-columns/","timestamp":"2024-11-14T10:14:14Z","content_type":"text/html","content_length":"108045","record_id":"<urn:uuid:81373c27-0a3f-4e8e-9b18-04dc55418f64>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00462.warc.gz"}
Re: st: Binomial regression [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: Binomial regression From Marcello Pagano <[email protected]> To [email protected] Subject Re: st: Binomial regression Date Fri, 03 Aug 2007 11:27:55 -0400 Convenience should not be the determining factor, rather which model best fits the data should be what governs our choices. Tim Wade wrote: Thanks Constantine for sharing your results. There are certainly cases where the risk difference is a more appropriate or desirable measure. In these cases, the identity link does have the significant advantage of being linear on the probability scale instead of the log odds scale. And while we can convert log-odds to probability, it is often convenient, especially when we have a multivariate model to be able to interpret the regression coefficients as the expected change in the probability (i.e, the risk difference) holding other factors constant. Since probability is not linear in the logistic model, similar inferences about probability or risk differences cannot be made with the logistic model. If one wants to make a statement about risk differences from multivariate logistic model, it needs to be with regard to holding the other covariates at some specific constant value, such as zero or at their mean values. On 8/2/07, Marcello Pagano <[email protected]> wrote: Sorry to disagree with your first sentence, Constantine. Logistic regression stipulates a linear relationship of covariates with the log of the odds of an event (not odds ratios). From this it is straightforward to recover the probability (or risk, if you prefer that label) of the event. Don't understand your aversion to logistic regression to achieve what you want to achieve. If you don't like the shape of the logistic, then any other cdf will provide you with a transformation to obey the constraints inherent in modeling a probability. The uniform distribution that you wish to use has to be curtailed, as others have pointed out. Constantine Daskalakis wrote: No argument about logistic regression. But that gives you odds ratios. What if you want risk differences instead? * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"https://www.stata.com/statalist/archive/2007-08/msg00144.html","timestamp":"2024-11-05T16:44:27Z","content_type":"text/html","content_length":"11685","record_id":"<urn:uuid:5780c5a9-63c5-4a92-a898-a18e6c194018>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00647.warc.gz"}
Explicit expression for decryption in a generalisation of the Paillier scheme The Paillier scheme encryption, (m, r) → c=g^m r^N mod N^2, where m is in Z[N], r is in Z[N] ^*, N=pq (p, q being strong primes) and g is an element of Z^*[N]^2 of order a multiple of N, is decrypted by mmodN= (L(c^λ mod N^2)/L(g ^λ mod N^2)) mod N, where L is defined on all u in Z^*[N]^2 such that umodN = 1, by L(u)=(u-1)/N. In the generalisation of the scheme by Damgård and Jurik, the modulus N ^2 is replaced by N^1+s, 1 ≤ s < p, q, but an explicit expression for decryption was not given. Rather a method, the only one known so far, was found for decryption, by first encoding the ciphertext and then using an algorithm of a quadratic order of complexity in s to extract the plaintext part by part therefrom. This gap is filled. An explicit expression for decryption in this setting is presented, which is more straight forward, linear in s in complexity and hence more efficient and reduces to the original Paillier L function for s=1. Dive into the research topics of 'Explicit expression for decryption in a generalisation of the Paillier scheme'. Together they form a unique fingerprint.
{"url":"https://research.brighton.ac.uk/en/publications/explicit-expression-for-decryption-in-a-generalisation-of-the-pai","timestamp":"2024-11-03T07:29:38Z","content_type":"text/html","content_length":"51851","record_id":"<urn:uuid:1bf329bc-673c-4ed5-aee3-267cc6d6f04c>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00005.warc.gz"}
[9:30am] Prof. Eduard Feireisl, Czech Academy of Sciences 9:00am Description: Name of the instructor: Prof. Eduard Feireisl. Affiliation: Czech Academy of Sciences. Mode of instruction: via videoconference. Title of the mini-course: Mathematical Aspects of Euler Equations. Venue: A1A2 hall, CDEEP, IIT Bombay. We consider the phenomenon of oscillations in the solution families to partial differential equations. To begin, we briefly discuss the mechanisms preventing oscillations/concentrations and make a short excursion in the theory of compensated compactness. Pursuing the philosophy "everything what is not forbidden is allowed" we show that certain problems in fluid dynamics admit oscillatory solutions. This fact gives rise to two rather unexpected and in a way contradictory results: (i) many problems describing inviscid fluid motion in several space dimensions admit global-in-time (weak solution); (ii) the solutions are not determined uniquely by their initial data. We examine the 10:00am basic analytical tool behind these rather ground breaking results - the method of convex integration applied to problems in fluid mechanics and, in particular, to the Euler system. [11:30am] Prof. Eduard Feireisl, Czech Academy of Sciences 11:00am Description: Name of the instructor: Prof. Eduard Feireisl. Affiliation: Czech Academy of Sciences. Mode of instruction: via videoconference. Title of the mini-course: Mathematical Aspects of Euler Equations. Venue: A1A2 hall, CDEEP, IIT Bombay. We consider the phenomenon of oscillations in the solution families to partial differential equations. To begin, we briefly discuss the mechanisms preventing oscillations/concentrations and make a short excursion in the theory of compensated compactness. Pursuing the philosophy "everything what is not forbidden is allowed" we show that certain problems in fluid dynamics admit oscillatory solutions. This fact gives rise to two rather unexpected and in a way contradictory results: (i) many problems describing inviscid fluid motion in several space dimensions admit global-in-time (weak solution); (ii) the solutions are not determined uniquely by their initial data. We examine the 12:00pm basic analytical tool behind these rather ground breaking results - the method of convex integration applied to problems in fluid mechanics and, in particular, to the Euler system. [3:30pm] Dilip P Patil, IISc Bangalore Commutative Algebra Seminar Title: Some Questions on Hilbert-Samuel functions. Time & Venue: 3:30 - 5 p.m., Room 215 Dates: Thursday, 31th January, 2018. [3:45pm] Sudarshan Gurjar 5:00pm K-Theory Seminar Speaker : Sudarshan Gurjar. Title : Topolgical vector bundles. Time : 3:45 pm - 5:15 pm. Date : Thursday 31st Jan 2019. Venue : Ramanujan Hall.
{"url":"https://www.math.iitb.ac.in/webcal/day.php?date=20190131&friendly=1","timestamp":"2024-11-03T06:08:39Z","content_type":"text/html","content_length":"14569","record_id":"<urn:uuid:db8d62e6-dfd2-47d5-bf20-67a56d89f5eb>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00048.warc.gz"}
Nonlinear Least Squares Nonlinear Least Squares Any Dakota optimization algorithm can be applied to calibration problems arising in parameter estimation, system identification, and test/analysis reconciliation. However, nonlinear least-squares methods are optimization algorithms that exploit the special structure of a sum of the squares objective function [GMW81]. To exploit the problem structure, more granularity is needed in the response data than is required for a typical optimization problem. That is, rather than using the sum-of-squares objective function and its gradient, least-squares iterators require each term used in the sum-of-squares formulation along with its gradient. This means that the \(m\) functions in the Dakota response data set consist of the individual least-squares terms along with any nonlinear inequality and equality constraints. These individual terms are often called residuals when they denote differences of observed quantities from values computed by the model whose parameters are being estimated. The enhanced granularity needed for nonlinear least-squares algorithms allows for simplified computation of an approximate Hessian matrix. In Gauss-Newton-based methods for example, the true Hessian matrix is approximated by neglecting terms in which residuals multiply Hessians (matrices of second partial derivatives) of residuals, under the assumption that the residuals tend towards zero at the solution. As a result, residual function value and gradient information (first-order information) is sufficient to define the value, gradient, and approximate Hessian of the sum-of-squares objective function (second-order information). See formulations for additional details on this approximation. In practice, least-squares solvers will tend to be significantly more efficient than general-purpose optimization algorithms when the Hessian approximation is a good one, e.g., when the residuals tend towards zero at the solution. Specifically, they can exhibit the quadratic convergence rates of full Newton methods, even though only first-order information is used. Gauss-Newton-based least-squares solvers may experience difficulty when the residuals at the solution are significant. Dakota has three solvers customized to take advantage of the sum of squared residuals structure in this problem formulation. Least squares solvers may experience difficulty when the residuals at the solution are significant, although experience has shown that Dakota’s NL2SOL method can handle some problems that are highly nonlinear and have nonzero residuals at the solution. Nonlinear Least Squares Fomulations Specialized least squares solution algorithms can exploit the structure of a sum of the squares objective function for problems of the form: \[\begin{split}\begin{aligned} \hbox{minimize:} & & f(\mathbf{x}) = \sum_{i=1}^{n}[T_i(\mathbf{x})]^2\nonumber\\ & & \mathbf{x} \in \Re^{p}\nonumber\\ \hbox{subject to:} & & \mathbf{g}_L \leq \mathbf {g(x)} \leq \mathbf{g}_U\nonumber\\ & & \mathbf{h(x)}=\mathbf{h}_{t}\label{nls:equation02}\\ & & \mathbf{a}_L \leq \mathbf{A}_i\mathbf{x} \leq \mathbf{a}_U\nonumber\\ & & \mathbf{A}_e\mathbf{x}=\ mathbf{a}_{t}\nonumber\\ & & \mathbf{x}_L \leq \mathbf{x} \leq \mathbf{x}_U\nonumber\end{aligned}\end{split}\] where \(f(\mathbf{x})\) is the objective function to be minimized and \(T_i(\mathbf{x})\) is the i\(^{\mathrm{th}}\) least squares term. The bound, linear, and nonlinear constraints are the same as described previously for (35), Optimization Formulations. Specialized least squares algorithms are generally based on the Gauss-Newton approximation. When differentiating \(f(\mathbf{x})\) twice, terms of \(T_i(\mathbf{x})T''_i(\mathbf{x})\) and \([T'_i(\mathbf{x})]^{2}\) result. By assuming that the former term tends toward zero near the solution since \(T_i(\mathbf{x})\) tends toward zero, then the Hessian matrix of second derivatives of \(f(\mathbf{x})\) can be approximated using only first derivatives of \(T_i(\mathbf{x})\). As a result, Gauss-Newton algorithms exhibit quadratic convergence rates near the solution for those cases when the Hessian approximation is accurate, i.e. the residuals tend towards zero at the solution. Thus, by exploiting the structure of the problem, the second order convergence characteristics of a full Newton algorithm can be obtained using only first order information from the least squares terms. A common example for \(T_i(\mathbf{x})\) might be the difference between experimental data and model predictions for a response quantity at a particular location and/or time step, i.e.: \[T_i(\mathbf{x}) = R_i(\mathbf{x})-\bar{R_i} \label{nls:equation03}\] where \(R_i(\mathbf{x})\) is the response quantity predicted by the model and \(\bar{R_i}\) is the corresponding experimental data. In this case, \(\mathbf{x}\) would have the meaning of model parameters which are not precisely known and are being calibrated to match available data. This class of problem is known by the terms parameter estimation, system identification, model calibration, test/analysis reconciliation, etc. Nonlinear Least Squares with Dakota In order to specify a least-squares problem, the responses section of the Dakota input should be configured using calibration_terms (as opposed to objective_functions as for optimization). The calibration terms refer to the residuals (differences between the simulation model and the data). Note that Dakota expects the residuals, not the squared residuals, and offers options for instead returning the simulation output to Dakota together with a separate calibration_data file, from which residuals will be calculated. Any linear or nonlinear constraints are handled in an identical way to that of optimization (see Section Optimization Formulations (35) ; note that neither Gauss-Newton nor NLSSOL require any constraint augmentation and NL2SOL supports neither linear nor nonlinear constraints). Gradients of the least-squares terms and nonlinear constraints are required and should be specified using either numerical_gradients, analytic_gradients, or mixed_gradients. Since explicit second derivatives are not used by the least-squares methods, the no_hessians specification should be used. Dakota’s scaling options, described in Section Optimization with User-specified or Automatic Scaling, Listing 53 can be used on least-squares problems, using the calibration_term_scales keyword to scale least-squares residuals, if desired. Solution Techniques Nonlinear least-squares problems can be solved using the Gauss-Newton algorithm, which leverages the full Newton method from OPT++, the NLSSOL algorithm, which is closely related to NPSOL, or the NL2SOL algorithm, which uses a secant-based algorithm. Details for each are provided below. Dakota’s Gauss-Newton algorithm consists of combining an implementation of the Gauss-Newton Hessian approximation (see Section Nonlinear Least Squares Fomulations) with full Newton optimization algorithms from the OPT++ package [MOHW07] (see Section Methods for Constrained Problems). The exact objective function value, exact objective function gradient, and the approximate objective function Hessian are defined from the least squares term values and gradients and are passed to the full-Newton optimizer from the OPT++ software package. As for all of the Newton-based optimization algorithms in OPT++, unconstrained, bound-constrained, and generally-constrained problems are supported. However, for the generally-constrained case, a derivative order mismatch exists in that the nonlinear interior point full Newton algorithm will require second-order information for the nonlinear constraints whereas the Gauss-Newton approximation only requires first order information for the least squares terms. License: LGPL. This approach can be selected using the optpp_g_newton method specification. An example specification follows: max_iterations = 50 convergence_tolerance = 1e-4 output debug Refer to the Dakota Reference Manual Keyword Reference for more detail on the input commands for the Gauss-Newton algorithm. The Gauss-Newton algorithm is gradient-based and is best suited for efficient navigation to a local least-squares solution in the vicinity of the initial point. Global optima in multimodal design spaces may be missed. Gauss-Newton supports bound, linear, and nonlinear constraints. For the nonlinearly-constrained case, constraint Hessians (required for full-Newton nonlinear interior point optimization algorithms) are approximated using quasi-Newton secant updates. Thus, both the objective and constraint Hessians are approximated using first-order information. The NLSSOL algorithm is bundled with NPSOL. It uses an SQP-based approach to solve generally-constrained nonlinear least-squares problems. It periodically employs the Gauss-Newton Hessian approximation to accelerate the search. Like the Gauss-Newton algorithm of Section Gauss-Newton), its derivative order is balanced in that it requires only first-order information for the least-squares terms and nonlinear constraints. License: commercial; see NPSOL Methods for Constrained Problems. This approach can be selected using the nlssol_sqp method specification. An example specification follows: convergence_tolerance = 1e-8 Refer to the Dakota Reference Manual Keyword Reference for more detail on the input commands for NLSSOL. The NL2SOL algorithm [DGW81] is a secant-based least-squares algorithm that is \(q\)-superlinearly convergent. It adaptively chooses between the Gauss-Newton Hessian approximation and this approximation augmented by a correction term from a secant update. NL2SOL tends to be more robust (than conventional Gauss-Newton approaches) for nonlinear functions and “large residual” problems, i.e., least-squares problems for which the residuals do not tend towards zero at the solution. License: publicly available. Additional Features Dakota’s tailored derivative-based least squares solvers (but not general optimization solvers) output confidence intervals on estimated parameters. The reported confidence intervals are univariate (per-parameter), based on local linearization, and will contain the true value of the parameters with 95% confidence. Their calculation essentially follows the exposition in [SW03] and is summarized Denote the variance estimate at the optimal calibrated parameters \(\hat{x}\) by \[\hat{\sigma}^2 = \frac{1}{N_{dof}}\sum_{i=1}^{n} T_i(\hat{x})^2,\] where \(T_i\) are the least squares terms (typically residuals) discussed above and \(N_{dof} = n - p\) denotes the number of degrees of freedom (total residuals \(n\) less the number of calibrated parameters \(p\)). Let \[J = \left[ \frac{\partial T(\hat{x})}{\partial x} \right]\] denote the \(n \times p\) matrix of partial derivatives of the residuals with respect to the calibrated paramters. Then the standard error \(SE_i\) for calibrated parameter \(x_i\) is given by \[SE_i = \hat{\sigma} \sqrt{\left( J^T J \right)^{-1}_{ii} }.\] Using a Student’s t-distribution with \(N_{dof}\) degrees of freedom, the 95% confidence interval for each parameter is given by \[\hat{x}_i \pm t(0.975, N_{dof}) \cdot SE_i.\] In the case where estimated gradients are extremely inaccurate or the model is very nonlinear, the confidence intervals reported are likely inaccurate as well. Further, confidence intervals cannot be calculated when the number of least-squares terms is less than the number of parameters to be estimated, when using vendor numerical gradients, or where there are replicate experiments. See [VSR+07] for more details about confidence intervals, and note that there are alternative approaches such as Bonferroni confidence intervals and joint confidence intervals based on linear approximations or Least squares calibration terms (responses) can be weighted. When observation error variance is provided alongside calibration data, its inverse is applied to yield the typical variance-weighted least squares formulation. Alternately, the calibration_terms weights specification can be used to weight the squared residuals. (Neither set of weights are adjusted during calibration as they would be in iteratively re-weighted least squares.) When response scaling is active, it is applied after error variance weighting and before weights application. The calibration_terms keyword documentation in the Dakota Reference Manual Keyword Reference has more detail about weighting and scaling of the residual terms. Both the Rosenbrock and textbook example problems can be formulated as nonlinear least-squares problems. Refer to Additional Examples for more information on these formulations. Figure Listing 54 shows an excerpt from the output obtained when running NL2SOL on a five-dimensional problem. Note that the optimal parameter estimates are printed, followed by the residual norm and values of the individual residual terms, followed by the confidence intervals on the parameters. <<<<< Iterator nl2sol completed. <<<<< Function evaluation summary: 27 total (26 new, 1 duplicate) <<<<< Best parameters = 3.7541004764e-01 x1 1.9358463401e+00 x2 -1.4646865611e+00 x3 1.2867533504e-02 x4 2.2122702030e-02 x5 <<<<< Best residual norm = 7.3924926090e-03; 0.5 * norm^2 = 2.7324473487e-05 <<<<< Best residual terms = Confidence Interval for x1 is [ 3.7116510206e-01, 3.7965499323e-01 ] Confidence Interval for x2 is [ 1.4845485507e+00, 2.3871441295e+00 ] Confidence Interval for x3 is [ -1.9189348458e+00, -1.0104382765e+00 ] Confidence Interval for x4 is [ 1.1948590669e-02, 1.3786476338e-02 ] Confidence Interval for x5 is [ 2.0289951664e-02, 2.3955452397e-02 ] The analysis driver script (the script being driven by Dakota) has to perform several tasks in the case of parameter estimation using nonlinear least-squares methods. The analysis driver script must: (1) read in the values of the parameters supplied by Dakota; (2) run the computer simulation with these parameter values; (3) retrieve the results from the computer simulation; (4) compute the difference between each computed simulation value and the corresponding experimental or measured value; and (5) write these residuals (differences) to an external file that gets passed back to Dakota. Note there will be one line per residual term, specified with calibration_terms in the Dakota input file. It is the last two steps which are different from most other Dakota applications. To simplify specifying a least squares problem, one may provide Dakota a data file containing experimental results or other calibration data. In the case of scalar calibration terms, this file may be specified with . In this case, Dakota will calculate the residuals (that is, the simulation model results minus the experimental results), and the user-provided script can omit this step: the script can just return the simulation outputs of interest. An example of this can be found in the file named dakota/share/dakota/examples/users/textbook_nls_datafile.in. In this example, there are 3 residual terms. The data file of experimental results associated with this example is textbook_nls_datafile.lsq.dat. These three values are subtracted from the least-squares terms to produce residuals for the nonlinear least-squares problem. Note that the file may be annotated (specified by annotated) or freeform (specified by freeform). The number of experiments in the calibration data file may be specified with , with one row of data per experiment. When multiple experiments are present, the total number of least squares terms will be the number of calibration terms times the number of experiments. Finally, the calibration data file may contain additional information than just the observed experimental responses. If the observed data has measurement error associated with it, this can be specified in columns of such error data after the response data. The type of measurement error is specified by variance_type. For scalar calibration terms, the variance_type can be either none (the user does not specify a measurement variance associated with each calibration term) or scalar (the user specifies one measurement variance per calibration term). For field calibration terms, the variance_type can also be diagonal or matrix. These are explained in more detail in the Reference manual. See the Keyword Reference for more information. Additionally, there is sometimes the need to specify configuration variables. These are often used in Bayesian calibration analysis. These are specified as num_config_variables. If the user specifies a positive number of configuration variables, it is expected that they will occur in the text file before the responses. Usage Guidelines Calibration problems can be transformed to general optimization problems where the objective is some type of aggregated error metric. For example, the objective could be the sum of squared error terms. However, it also could be the mean of the absolute value of the error terms, the maximum difference between the simulation results and observational results, etc. In all of these cases, one can pose the calibration problem as an optimization problem that can be solved by any of Dakota’s optimizers. In this situation, when applying an general optimization solver to a calibration problem, the guidelines in Guide Table 12 still apply. In some cases, it will be better to use a nonlinear least-squares method instead of a general optimizer to determine optimal parameter values which result in simulation responses that “best fit” the observational data. Nonlinear least squares methods exploit the special structure of a sum of the squares objective function. They can be much more efficient than general optimizers. However, these methods require the gradients of the function with respect to the parameters being calibrated. If the model is not able to produce gradients, one can use finite differencing to obtain gradients. However, the gradients must be reasonably accurate for the method to proceed. Note that the nonlinear least-squares methods only operate on a sum of squared errors as the objective. Also, the user must return each residual term separately to Dakota, whereas the user can return an aggregated error measure in the case of general optimizers. The three nonlinear least-squares methods are the Gauss-Newton method in OPT++, NLSSOL, and NL2SOL. Any of these may be tried; they give similar performance on many problems. NL2SOL tends to be more robust than Gauss-Newton, especially for nonlinear functions and large-residual problems where one is not able to drive the residuals to zero at the solution. NLSSOL does require that the user has the NPSOL library. Note that all of these methods are local in the sense that they are gradient-based and depend on an initial starting point. Often they are used in conjunction with a multi-start method, to perform several repetitions of the optimization at different starting points in the parameter space. Another approach is to use a general global optimizer such as a genetic algorithm or DIRECT as mentioned above. This can be much more expensive, however, in terms of the number of function evaluations required.
{"url":"https://snl-dakota.github.io/docs/6.20.0/users/usingdakota/studytypes/nonlinearleastsquares.html","timestamp":"2024-11-01T20:51:07Z","content_type":"text/html","content_length":"40852","record_id":"<urn:uuid:f52f941a-682b-4698-b843-35dcf13729a4>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00836.warc.gz"}
Compare Two Vectors For Equality in R - Data Science Parichay Vectors are used to store one-dimensional data of the same type in R. In this tutorial, we will look at how to compare two vectors for equality in R with the help of some examples. How to check if two vectors are equal in R? You can use the identical() function in R to compare two vectors for equality. Pass the two vectors as arguments to the indentical() function. The following is the syntax – identical(vec1, vec2) It returns TRUE if both vectors contain the same elements in the same positions. That is, the corresponding elements in both vectors are the same. Alternatively, you can use a combination of the all() function and the equality operator to check whether the two vectors are equal or not. The following is the syntax – all(vec1 == vec2) Let’s now look at some examples of using the above methods to check two vectors for equality. Compare two vectors for equality using the identical() function Let’s create two vectors containing the same values at the same positions and check whether they are equal or not using the identical() function. # create two vector vec1 <- c(1, 2, 3, 4) vec2 <- c(1, 2, 3, 4) # check if both vectors are equal print(identical(vec1, vec2)) 📚 Data Science Programs By Skill Level Introductory ⭐ Intermediate ⭐⭐⭐ Advanced ⭐⭐⭐⭐⭐ 🔎 Find Data Science Programs 👨💻 111,889 already enrolled Disclaimer: Data Science Parichay is reader supported. When you purchase a course through a link on this site, we may earn a small commission at no additional cost to you. Earned commissions help support this website and its team of writers. [1] TRUE We get TRUE as the output since both vectors are equal. Let’s now apply the identical() function on two vectors that are not equal. # create two vector vec1 <- c(1, 2, 3, 4) vec2 <- c(1, 3, 3, 4) # check if both vectors are equal print(identical(vec1, vec2)) [1] FALSE We get FALSE as the output. Compare two vectors for equality using the all() function and the == operator You can also use a combination of the R all() function and the equality operator == to check whether two vectors are equal or not. First, use the == operator to compare the two vectors, this will result in a logical vector with TRUE for values that are equal and FALSE for values that are not equal. # create two vector vec1 <- c(1, 2, 3, 4) vec2 <- c(1, 2, 3, 4) # compare vec1 and vec2 print(vec1 == vec2) [1] TRUE TRUE TRUE TRUE If all the values in the resulting logical vector are TRUE then we can say that both the vectors are equal. You can do so using the R all() function which returns TRUE only if all the values in the passed logical vector are TRUE. # check if both vectors are equal print(all(vec1 == vec2)) [1] TRUE Thus, a combination of the equality operator, == and the all() function can tell whether two vectors are equal. Let’s look at another example. # create two vector vec1 <- c(1, 2, 3, 4) vec2 <- c(1, 3, 3, 4) # check if both vectors are equal print(all(vec1 == vec2)) [1] FALSE We get FALSE as the output since the two vectors are not equal. You might also be interested in – Subscribe to our newsletter for more informative guides and tutorials. We do not spam and you can opt out any time.
{"url":"https://datascienceparichay.com/article/r-compare-two-vectors-for-equality/","timestamp":"2024-11-13T18:03:18Z","content_type":"text/html","content_length":"260006","record_id":"<urn:uuid:30bc36e6-4d55-4f07-a5d3-cd6f435a7592>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00078.warc.gz"}
Asymptotic Geometric Analysis, Part II. This book is a continuation of Asymptotic Geometric Analysis, Part I, which was published as volume 202 in this series. Asymptotic geometric analysis studies properties of geometric objects, such as normed spaces, convex bodies, or convex functions, when the dimensions of these objects increase to infinity. The asymptotic approach reveals many very novel phenomena which influence other fields in mathematics, especially where a large data set is of main concern, or a number of parameters which becomes uncontrollably large. One of the important features of this new theory is in developing tools which allow studying high parametric families. Among the topics covered in the book are measure concentration, isoperimetric constants of log-concave measures, thin-shell estimates, stochastic localization, the geometry of Gaussian measures, volume inequalities for convex bodies, local theory of Banach spaces, type and cotype, the Banach-Mazur compactum, symmetrizations, restricted invertibility, and functional versions of geometric notions and inequalities. Original language English Publisher American Mathematical Society Number of pages 686 ISBN (Print) 9781470467777, 9781470463601 State Published - 28 Feb 2022 Publication series Name Mathematical Surveys and Monographs Volume 261 Dive into the research topics of 'Asymptotic Geometric Analysis, Part II.'. Together they form a unique fingerprint.
{"url":"https://cris.iucc.ac.il/en/publications/asymptotic-geometric-analysis-part-ii","timestamp":"2024-11-02T12:12:15Z","content_type":"text/html","content_length":"44403","record_id":"<urn:uuid:2fc3e130-c401-469f-8878-1a6b20eb229b>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00162.warc.gz"}
Weird Kids ALPHA PLUS, 11609 Robinwood Drive, Hagerstown, MD 21742-4476, 301-733-1456, Math Resources K-9, hands-on, practical applications, math mastery emphasized vs shallow spiral teaching approach AMATH, AMATH Systems, Inc., 5355 Tara Hill Dr , Dublin, OH 43017 (800) 442-6284 (orders), Pre-algebra (K-8) review on computer, aimed at older students or adults, web site very informative, sample AUDIO MEMORY PUBLISHING, 501 Cliff Dr, Newport Beach, CA 92663 800-365-SING, Learning tapes: geography, history, etc Backyard Scientist PO Box 16966, Irvine, CA 92713 - Science books BARNUM SOFTWARE, 3450 Lake Shore Ave, Suite 200A, Oakland, CA 94610, 800-553-9155, Quarter Mile math drill software, 7 topic areas, thousands of practice problems each BOXER LEARNING, INC developers of interactive, multimedia math tutorial software 105 West Main St, Charlottesville, VA 22902, 800-736-2824, Algebra, Intermediate Algebra, and Trigonometry, (Grade 6 - Adult), online tutorials and supplemental activities, diagnostic test, free home trial CAN DO VIDEOTAPES, 11511 Pin Oak Drive, Oakdale, CA 95361, 800-533-2653, Exercise and Memorize Math Videos, review math facts and concepts with videos, workbooks, and teacher guides CANON CASSETTES, Box 73, Calumet, OK 73014, 405-893-2239, Math drill tapes (No web site - email only) CHALK DUST CO., 11 Sterling Ct, Sugar Land, TX 77479, 800-588-7564, 281-265-2495, complete secondary math program, primarily via videos, supplemented by texts * Cornell Theory Center Math and Science Gateway CREATIVE TEACHING ASSOCIATES, 800-767-4282, P.O. Box 7766, Fresno, CA 93747, catalog, non-electronic math/science games and workbooks "that transform learning into a meaningful and enriching CTC, 4884 Cloister Dr, Rockville, MD 20852, 800-335-0781, math software, CD w/extra cost teacher's manual, with real world applications, grades 5-12, free demos, "interactive tutoring, conceptual building, and exploration activities to teach, motivate and build confidence" Cuisenaire Math Cuisenaire Co, PO Box 5026, White Plains, NY 10602-5026, (800) 237-3142, Hands-on math and science for grades K-12. *Davidson (maker of Spell It Deluxe, Reading Blaster, Math Blaster, KidWorks Deluxe, Word Blaster and other computer education programs) can be reached at Sales/Customer Service 1-800-545-7677 Davidson & Associates, Inc P.O. Box 2961 Torrence, CA 90509 (This link isn't working, but you can buy Davidson programs from many sources) Developmental Math Mathematics Program Assoc, Box 2118, Halesite, NY 11743, 516-643-9300, builds facts, skills through meaningful practice, 16 levels, teacher guides, diagnostic tests Dittman Research PO Box 202, Lincoln University, PA 19352, new software provider, Cyber Flashcard Workshop is the first offering, allows testing of progress and analysis of student skills The Complete Book of Fingermath Pat Willette (Link not working, I'll continue researching) Gnarly Math SMP Company, Box 1563, Santa Fe, 87504-1563, software for serious math in an interesting style, ages 10-14, algebra, geometry, probability, etc, math lab for exploring math (new) Homeschool Math Very nice site with books to avoid, free worksheets, teaching tips and a nice list of games. Holden Science Has online experiments and offers a lab book for $5-7. Info Math 888-MATH-456, Math Tutor software, others, teaches math the traditional way rather than by playing games Intelligent Tutor 9609 Cypress, Munster, IN 46321, 800-521-4518, Grades 7-12, self teaching, tracking of progress, software to supplement curriculum Interactive Math Academic Systems, 800-694-6858, software (CD) programs for prealgebra to algebra 2, includes animations, video, and real world examples, with assessments, reporting, online demos, well established program Key Curriculum Press 1150 65th Street, Emeryville, CA 94608, 800-995-MATH, math products for upper elementary & high school, "Key to ..." workbook series, supplemental manipulative and software, Kid's Bank Learn about money Mastery Publications 90 Hillside Lane, Arden. NC 28704-9709, 828-684-0429, Mastering Math series (K-6+) Math Essentials 800-431-1579, book to prepare for algebra - master essential math skills Math Forum - runs a newsletter of math info and has archive of newsletters and links Math Teacher's Press 5100 Gamble Dr, Minneapolis, MN 55416-1585, 800-852-2435, Moving with Math, K-8, manipulative based Math U See Steve Demme's multi-sensory approach to math, many local suppliers provide information and handle orders Mathematics Worksheet Factory Suite 120, 9110-A Young Rd S., Chilliwack, British Columbia, Canada, V2P 4R5, computer program to generate customized math practice worksheets Mathpert Systems, 2211 Lawson Lane, Santa Clara, CA 95054, algebra to calculus software learning, online demo, trial version, provides step by step solutions - not just answers, thousands of exercises, teaches strategy to solve problems Mortensen more than MATH, Box 98, Hayden, ID 83835 800-475-8748, hands-on, visual, algebra and more NATHAN has a nice article on "Before Math Begins" Open Court Publishing 220 E. Danieldale, DeSoto, TX 75115, 888-772-4543, division of SRA, various math and reading books Pig out on Math Inst. for Math Mania, Box 910, Montpelier, VT 05601, 800-NUMERAL, 60 page catalog, hands-on focus, [national standards orientation-NCTM] Math Strategies! (game), PixelGraphics, 2459 SE TV Highway, #250, Hillsboro, OR 97123, 800-GAME-345, a lively DOS based arcade style math practice game, multilevels of play and practice sets, free demo on web site, $30 (get a $5 discount if ordered online and delivered by email) Providence Project 14566 NW 110th St, Whitewater, KS 67154, 888-776-8776 "Learning Vitamins", Calculadders math practice, Readywriter, AlphaBetter for language Quaternion Press PO Box 700564, San Antonio, TX 78270, Mathematics for Little Ones now available, ages K-3rd grade, clear, easy to understand, aimed at homeschoolers, encourages early learning Saxon Publishers, Inc 2450 John Saxon Blvd, Norman, OK 73071, 800-284-7019. Math program all grades, plus physics, Placement tests, online exercises Softbasics Box 255, Mill River, MA 01244, 413-229-2191, Math Maker, Math Master, Math User software Tri-Pak, prints 1000's of math activities with answer keys Video Resources Software 11767 South Dixie Hghwy, Miami, FL 33156, 888-ACE-MATH, video instruction
{"url":"http://www.weirdkids.com/educational/math.htm","timestamp":"2024-11-13T20:40:38Z","content_type":"text/html","content_length":"18815","record_id":"<urn:uuid:b04fc8a8-185e-4d94-bab6-b5b6a886adfe>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00653.warc.gz"}
P(a ∩ b) - (Bayesian Statistics) - Vocab, Definition, Explanations | Fiveable P(a ∩ b) from class: Bayesian Statistics The term p(a ∩ b) represents the joint probability of two events, A and B, occurring simultaneously. This concept is crucial in understanding how two events interact with each other and is foundational in the study of joint and conditional probabilities. Joint probability helps in analyzing the relationship between variables and aids in making predictions about outcomes when multiple events are involved. congrats on reading the definition of p(a ∩ b). now let's actually learn it. 5 Must Know Facts For Your Next Test 1. Joint probability can be calculated using the formula: p(a ∩ b) = p(a) * p(b | a) or equivalently p(a ∩ b) = p(b) * p(a | b). 2. If events A and B are independent, then the joint probability simplifies to the product of their individual probabilities: p(a ∩ b) = p(a) * p(b). 3. Joint probabilities are often represented in a Venn diagram where the overlap between two circles indicates the joint occurrence of events A and B. 4. Understanding joint probabilities is essential for constructing Bayesian networks, which model the probabilistic relationships among multiple variables. 5. Joint probabilities can also help identify correlations between events, indicating how the occurrence of one event may influence another. Review Questions • How does joint probability differ from marginal probability, and why is this distinction important? □ Joint probability focuses on the likelihood of two events happening together, expressed as p(a ∩ b), whereas marginal probability looks at the chance of one event occurring independently, like p(A) or p(B). Understanding this difference is important because it allows us to see how events influence each other rather than just considering their individual occurrences. This insight can lead to better decision-making based on the relationships between variables. • Describe how to calculate joint probability when you have conditional probabilities available. □ To calculate joint probability using conditional probabilities, you can use the formula: p(a ∩ b) = p(a) * p(b | a). This means you first find the probability of event A occurring and then multiply it by the probability of event B occurring given that A has already happened. This method highlights how one event's occurrence can directly influence another's likelihood. • Evaluate the implications of joint probability in real-world scenarios, particularly in fields like medicine or finance. □ In real-world scenarios such as medicine, joint probability can assess risks associated with multiple health conditions simultaneously, which helps doctors make informed decisions about patient care. For instance, evaluating the likelihood that a patient has both diabetes and hypertension involves calculating p(diabetes ∩ hypertension). In finance, joint probabilities are crucial for risk management; understanding how different market factors interact allows investors to make smarter investment choices. Analyzing these relationships helps in predicting outcomes more accurately and effectively. © 2024 Fiveable Inc. All rights reserved. AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
{"url":"https://library.fiveable.me/key-terms/bayesian-statistics/pa-%E2%88%A9-b","timestamp":"2024-11-14T11:50:01Z","content_type":"text/html","content_length":"150246","record_id":"<urn:uuid:d87924b2-8b8c-4c8f-bb52-e7e4b8cdcdcc>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00706.warc.gz"}
Create a routine that, given a set of strings representing directory paths and a single character directory separator, will return a string representi Write a program or a script that returns the last Sundays of each month of a given year. The year may be given through any simple input method in your These are all of the permutations of the symbols A, B, C and D, except for one that's not listed. Find that missing permutation. (cf. Permutations)
{"url":"https://tfetimes.com/tag/find/","timestamp":"2024-11-14T05:10:41Z","content_type":"text/html","content_length":"85023","record_id":"<urn:uuid:57fdbd57-3736-403b-ac2d-334276f1f563>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00632.warc.gz"}
Elastic Scattering previous home next PDF 16. Elastic Scattering Michael Fowler Billiard Balls “Elastic” means no internal energy modes of the scatterer or of the scatteree are excited$—$so total kinetic energy is conserved. As a simple first exercise, think of two billiard balls colliding. The best way to see it is in the center of mass frame of reference. If they’re equal mass, they come in from opposite directions, scatter, then move off in opposite directions. In the early days of particle accelerators (before colliders) a beam of particles was directed at a stationary target. So, the frame in which one particle is initially at rest is called the lab frame. What happens if we shoot one billiard ball at another which is initially at rest? (We’ll ignore possible internal energies, including spinning.) The answer is that they come off at right angles. This follows trivially from conservation of energy and momentum (in an obvious notation) $m\stackrel{\to }{v}=m{\stackrel{\to }{v}}_{1}+m{\stackrel{\to }{v}}_{2},\text{ }\frac{1}{2}m{\stackrel{\to }{v}}_{}^{2}=\frac{1}{2}m{\stackrel{\to }{v}}_{1}^{2}+\frac{1}{2}m{\stackrel{\to }{v}}_{2}^ and Pythagoras’ theorem. Discovery of the Nucleus The first significant use of scattering to learn about the internal structure of matter was Rutherford’s use of $\alpha$ particles directed at gold atoms. This experiment revealed the atomic nucleus for the first time. Our plan here is to analyze this kind of scattering, to understand why it indicated the presence of a nucleus. Similar much later analyses have established that the proton itself has point like constituents, so this is not just of distant historical interest. For $\alpha$ particles on gold atoms, it’s an excellent approximation to take the scatterer as being fixed. This is not an essential requirement, but it simplifies the calculation, and can be corrected for later. To visualize what’s going on, think of the scatterer as a bowling ball with tiny marbles directed towards it, they’re moving fast horizontally, along parallel but random paths. (Let’s take zero gravity here$—$the $\alpha$ particles we’re modeling are moving at about one-twentieth the speed of light!) We observe the rate at which marbles are being scattered in various directions. Call the scattering angle $\chi .$ So, let’s assume the width of the “beam” of marbles is much greater than the size of the bowling ball. We’ll also take the intensity of the beam to be uniform, with $n$ marbles crossing unit area perpendicular to the beam per second. Now, if the bowling ball has radius $R,$ and we ignore the radius of the tiny marbles, the number of marbles that strike the bowling ball and are scattered is clearly $\pi {R}^{2}n$ per second. Not surprisingly, $\pi {R}^{2}$ is called the total cross-section and usually denoted by $\sigma .$ The Differential Cross Section In a real scattering experiment, information about the scatterer can be figured out from the different rates of scattering to different angles. Detectors are placed at various angles $\left(\theta , \varphi \right)$. Of course, a physical detector collects scattered particles over some nonzero solid angle. The usual notation for infinitesimal solid angle is $d\Omega =\mathrm{sin}\theta d\theta d\varphi .$ The full solid angle (all possible scatterings) is $\int d\Omega =4\pi$, the area of a sphere of unit radius. (Note: Landau uses $dο$ for solid angle increment, but $d\Omega$ has become The differential cross section, written $d\sigma /d\Omega ,$ is the fraction of the total number of scattered particles that come out in the solid angle $d\Omega .$, so the rate of particle scattering to this detector is $nd\sigma /d\Omega$, with $n$ the beam intensity as defined above. Now, we’ll assume the potential is spherically symmetric. Imagine a line parallel to the incoming particles going through the center of the atom. For a given ingoing particle, its impact parameter is defined as the distance its ingoing line of flight is from this central line. Landau calls this $\rho ,$ we’ll follow modern usage and call it $b.$ A particle coming in with impact parameter between $b$ and $b+db$ will be scattered through an angle between $\chi$ and $\chi +d\chi ,$ where we’re going to calculate $\chi \left(b\right)$ by solving the equation of motion of a single particle in a repulsive inverse-square force. Note: we’ve switched for this occasion from $\theta$ to $\chi$ for the angle scattered through because we want to save $\theta$ for the $\left(r,\theta \right)$ coordinates describing the complete trajectory, or orbit, of the scattered particle. So, an ingoing cross section $d\sigma =2\pi bdb$ scatters particles into an outgoing spherical area (centered on the scatterer) $2\pi R\mathrm{sin}\chi Rd\chi ,$ that is, a solid angle $d\Omega =2\pi \mathrm{sin}\chi d\chi .$ Therefore the scattering differential cross section $\frac{d\sigma }{d\Omega }=\frac{b\left(\chi \right)}{\mathrm{sin}\chi }\left|\frac{db}{d\chi }\right|.$ (Note that $d\chi /db$ is clearly negative$—$increasing $b$ means increasing distance from the scatterer, so a smaller $\chi$.) Analyzing Inverse-Square Repulsive Scattering: Kepler Again To make further progress, we must calculate $b\left(\chi \right)$, or equivalently $\chi \left(b\right):$ what is the angle of scattering, the angle between the outgoing velocity and the ingoing velocity, for a given impact parameter? $\chi$ will of course also depend on the strength of the repulsion, and the ingoing particle energy. Recall our equation for Kepler orbits: $\frac{{d}^{2}u}{d{\theta }^{2}}+u=\frac{GM{m}^{2}}{{L}^{2}}.$ Let’s now switch from gravitational scattering with an attractive force $GMm/{r}^{2}$ to an electrical repulsive force between two charges ${Z}_{1}e,\text{ }{Z}_{2}e$, force strength $\frac{1}{4\pi {\epsilon }_{0}}\frac{{Z}_{1}{Z}_{2}{e}^{2}}{{r}^{2}}=\frac{k}{{r}^{2}},$ say. Since this is repulsive, the sign will change in the radial acceleration equation, $\frac{{d}^{2}u}{d{\theta }^{2}}+u=-\frac{km}{{L}^{2}}.$ Also, we want the scattering parameterized in terms of the impact parameter $b$ and the incoming speed ${v}_{\infty },$ so putting $L=m{v}_{\infty }b$ this is $\frac{{d}^{2}u}{d{\theta }^{2}}+u=-\frac{k}{m{b}^{2}{v}_{\infty }^{2}}.$ So just as with the Kepler problem, the orbit is given by $\frac{1}{r}=u=-\frac{k}{m{b}^{2}{v}_{\infty }^{2}}+C\mathrm{cos}\left(\theta -{\theta }_{0}\right)=-\kappa +C\mathrm{cos}\left(\theta -{\theta }_{0}\right),$ say. From the lecture on Orbital Mathematics, the polar equation for the left hyperbola branch relative to the external (right) focus is $\mathcal{l}/r=-e\mathrm{cos}\theta -1,$ this is a branch symmetric about the $x$ axis: But we want the incoming branch to be parallel to the axis, which we do by suitable choice of ${\theta }_{0}$. In other words, we rotate the hyperbola clockwise through half the angle between its asymptotes, keeping the scattering center (right-hand focus) fixed. From the lecture on orbital mathematics (last page), the perpendicular distance from the focus to the asymptote is the hyperbola parameter $b$! Presumably, this is why we use $b$ for the impact Hence the particle goes in a hyperbolic path with parameters $e/\mathcal{l}=-C,\text{ }1/\mathcal{l}=\kappa .$ This is not enough information to fix the path uniquely: we’ve only fed in the angular momentum $mb{v}_{\infty },$ not the energy, so this is a family of paths having different impact parameters but the same angular momentum . We can, however, fix the path uniquely by equating the leading order correction to the incoming zeroth order straight path: the particle is coming in parallel to the $x$ axis far away to the left, perpendicular distance $b$ from the axis, that is, from the line $\theta =\pi .$ So, going back to that pre-scattering time, $u\to 0,\text{ }\text{ }\pi -\theta \to b/r=bu,$ and in this small $u$ limit, $u=C\mathrm{cos}\left(\pi -bu-{\theta }_{0}\right)-\kappa \cong C\mathrm{cos}\left(\pi -{\theta }_{0}\right)-\kappa +bCu\mathrm{sin}\left(\pi -{\theta }_{0}\right).$ Matching the zeroth order and the first order terms $C\mathrm{cos}\left(\pi -{\theta }_{0}\right)=\kappa ,\text{ }u=bCu\mathrm{sin}\left(\pi -{\theta }_{0}\right),$ eliminates $C$ and fixes the angle ${\theta }_{0},$ which is the angle the hyperbola had to be rotated through to align the asymptote with the negative $x$ axis, and therefore half the angle between the asymptotes, which would be $\pi$ minus the angle of scattering $\chi$ (see the earlier diagram), $\mathrm{tan}\left(\pi -{\theta }_{0}\right)=-\mathrm{tan}{\theta }_{0}=\frac{1}{b\kappa }=\frac{mb{v}_{\infty }^{2}}{k}.$ $\chi =\pi -2{\mathrm{cot}}^{-1}b\kappa =2{\mathrm{tan}}^{-1}b\kappa ,$ So this is the scattering angle in terms of the impact parameter $b$, that is, in the diagram above $\chi \left(b\right)=2{\mathrm{tan}}^{-1}\left(\frac{k}{mb{v}_{\infty }^{2}}\right).$ $b=\frac{k}{m{v}_{\infty }^{2}}\mathrm{cot}\frac{\chi }{2}$, so $db=\frac{k}{2m{v}_{\infty }^{2}}{\text{cosec}}^{2}\frac{\chi }{2}d\chi ,$ and the incremental cross sectional area $d\sigma =2\pi bdb=\pi {\left(\frac{k}{m{v}_{\infty }^{2}}\right)}^{2}{\text{cosec}}^{2}\frac{1}{2}\chi \mathrm{cot}\frac{1}{2}\chi d\chi =\pi {\left(\frac{k}{m{v}_{\infty }^{2}}\right)}^{2}\frac{\ mathrm{cos}\frac{1}{2}\chi }{{\mathrm{sin}}^{3}\frac{1}{2}\chi }d\chi ={\left(\frac{k}{2m{v}_{\infty }^{2}}\right)}^{2}\frac{1}{{\mathrm{sin}}^{4}\frac{1}{2}\chi }d\Omega .$ . This is Rutherford’s formula: the incremental cross section for scattering into an incremental solid angle, the differential cross section $\frac{d\sigma }{d\Omega }={\left(\frac{k}{2m{v}_{\infty }^{2}}\right)}^{2}\frac{1}{{\mathrm{sin}}^{4}\frac{1}{2}\chi }.$ (Recall $k=\frac{1}{4\pi {\epsilon }_{0}}{Z}_{1}{Z}_{2}{e}^{2}$ in MKS units.) Vectorial Derivation of the Scattering Angle (from Milne) The essential result of the above analysis was the scattering angle as a function of impact parameter, for a given incoming energy. It's worth noting that this can be found more directly by vectorial methods from Hamilton's equation. Recall from the last lecture Hamilton’s equation $\stackrel{\to }{L}×m\stackrel{¨}{\stackrel{\to }{r}}=-m{r}^{2}f\left(r\right)\frac{d\stackrel{^}{\stackrel{\to }{r}}}{dt}$ and the integral for an inverse square force $f\left(r\right)=k/{r}^{2}$ (changing the sign of $\stackrel{\to }{A}$ for later convenience) $\stackrel{\to }{L}×m\stackrel{˙}{\stackrel{\to }{r}}=km\text{ }\stackrel{^}{\stackrel{\to }{r}}+\stackrel{\to }{A}.$ As previously discussed, multiplying by $\stackrel{\to }{L}\cdot$ establishes that $\stackrel{\to }{A}$ is in the plane of the orbit, and multiplying by $\stackrel{\to }{r}\cdot$ gives This corresponds to the equation $\mathcal{l}/r=-e\mathrm{cos}\theta -1$ (the left-hand branch with the right-hand focus as origin, note from diagram above that $\mathrm{cos}\theta$ is negative throughout) and $\frac{{L}^{2}}{kmr}=-1-\frac{A}{km}\mathrm{cos}\theta .$ To find the scattering angle, suppose the unit vector pointing parallel to the asymptote is ${\stackrel{^}{\stackrel{\to }{r}}}_{\infty },$ so the asymptotic velocity is ${v}_{\infty }{\stackrel{^}{\ stackrel{\to }{r}}}_{\infty }.$ Note that as before, $\stackrel{\to }{A}$ is along the major axis (to give the correct form for the $\left(r,\theta \right)$ equation), and $r=\infty$ gives the asymptotic angles from $\mathrm{cos}{\theta }_{r=\infty }=-km/A.$ We’re not rotating the hyperbola as we did in the alternative treatment above: here we keep it symmetric about the $x$ axis, and find its asymptotic angle to that axis, which is one-half the scattering angle. Now take Hamilton’s equation in the asymptotic limit, where the velocity is parallel to the displacement: the vector product of Hamilton’s equation $×\text{ }\text{\hspace{0.17em}}{\stackrel{^}{\stackrel{⇀}{r}}}_{\infty }$ yields $\stackrel{\to }{A}×{\stackrel{^}{\stackrel{⇀}{r}}}_{\infty }=\left(\stackrel{\to }{L}×m{v}_{\infty }{\stackrel{^}{\stackrel{⇀}{r}}}_{\infty }\right)×{\stackrel{^}{\stackrel{⇀}{r}}}_{\infty }=-\ stackrel{\to }{L}\left(L/b\right).$ It follows that $\mathrm{sin}{\theta }_{r=\infty }=-{L}^{2}/Ab,$ And together with $\mathrm{cos}{\theta }_{r=\infty }=-km/A,$ we find $\mathrm{tan}{\theta }_{r=\infty }=\frac{{L}^{2}}{kmb}=\frac{mb{v}_{\infty }^{2}}{k}.$ This is the angle between the asymptote and the major axis: the scattering angle $\chi =\pi -2{\theta }_{r=\infty }=2\left(\frac{\pi }{2}-{\theta }_{r=\infty }\right)=2{\mathrm{tan}}^{-1}\left(\frac{k}{mb{v}_{\infty }^{2}}\right),$ agreeing with the previous result. previous home next PDF
{"url":"https://galileoandeinstein.phys.virginia.edu/7010/CM_16_Elastic_Scattering.html","timestamp":"2024-11-03T19:48:41Z","content_type":"text/html","content_length":"46355","record_id":"<urn:uuid:c1ffa860-f3e6-453b-8156-f5de43fd3a4c>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00876.warc.gz"}
Cite as Ioana O. Bercea, Martin Groß, Samir Khuller, Aounon Kumar, Clemens Rösner, Daniel R. Schmidt, and Melanie Schmidt. On the Cost of Essentially Fair Clusterings. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 145, pp. 18:1-18:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019) Copy BibTex To Clipboard author = {Bercea, Ioana O. and Gro{\ss}, Martin and Khuller, Samir and Kumar, Aounon and R\"{o}sner, Clemens and Schmidt, Daniel R. and Schmidt, Melanie}, title = {{On the Cost of Essentially Fair Clusterings}}, booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019)}, pages = {18:1--18:22}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-125-2}, ISSN = {1868-8969}, year = {2019}, volume = {145}, editor = {Achlioptas, Dimitris and V\'{e}gh, L\'{a}szl\'{o} A.}, publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik}, address = {Dagstuhl, Germany}, URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.APPROX-RANDOM.2019.18}, URN = {urn:nbn:de:0030-drops-112337}, doi = {10.4230/LIPIcs.APPROX-RANDOM.2019.18}, annote = {Keywords: approximation, clustering, fairness, LP rounding}
{"url":"https://drops.dagstuhl.de/search/documents?author=Bercea,%20Ioana%20O.","timestamp":"2024-11-05T16:41:54Z","content_type":"text/html","content_length":"73269","record_id":"<urn:uuid:89f2a51b-b34e-46e6-91f2-88d2db55e5f8>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00423.warc.gz"}
smartZebra | Raw beta vs. adjusted beta: Choosing the right beta factors in the CAPM Significance of the beta factors in the CAPM In the context of the Capital Asset Pricing Model (CAPM), beta represents the risk measure for systematic risk. This risk is the portion of the fluctuation in an equity return that cannot be eliminated even within a fully diversified equity portfolio and must therefore be borne by the investor. The beta factors as an expected value Risk-averse investors expect to be compensated for this part of the risk in the form of a risk premium. The price of risk is known as the equity risk premium, and the amount of risk is measured by the beta factors. The price of risk multiplied by the amount of risk gives the total risk premium of the equity return. The parameters in the CAPM are forward-looking expected values. This applies in particular to the expected covariances of equity returns, which are decisive for the formation of a fully diversified equity portfolio. The beta factors as a price-determining element of the expected return on equities is also an expected value. Traditionally, the beta factors are determined empirically on the basis of historical share returns. Mean reversion property of empirically measured beta factors In valuation practice, the beta factors are determined using historical capital market data. This is not a problem from a methodological point of view, but raises the question of the extent to which historical data can be a good indicator of the future risk profile of an investment. Empirically determined betas statistically show a so-called mean reversion property towards a value of 1. Historical betas greater than 1 tend to fall, while betas less than 1 tend to rise. Blume adjustment as a good compromise for valuation practice The mean reversion property indicates that historical betas are only suitable for forward-looking business valuations to a limited extent. However, this problem can be minimized by using the adjusted beta instead of the raw beta. The so-called Blume adjustment (M. Blume 1971) is the most frequently used adjustment. The temporal instability and return tendency of the beta factors towards 1.0 is approximated in the Blume adjustment by the following equation: adjusted beta = α0 +α1 * raw beta with α0 = 1/3 and α1 = 2/3 The adjusted beta is therefore determined here by a mean reversion process in which the historically measured beta is incorporated with a coefficient of 2/3. Although there are also more complex adjustment algorithms (e.g. O. Vasicek, 1973), the relatively easy-to-use and sufficiently valid Blume adjustment has become established in valuation practice. Wrapping it up The choice between raw beta and adjusted beta is crucial for risk assessment in the CAPM. While raw beta is based on historical data, adjusted beta takes into account the tendency towards mean reversion and thus provides a more reliable estimate for future risks. The Blume adjustment is a practical method to take this tendency into account and is therefore frequently used in valuation practice. SmartZebra's tools and expertise support the accurate determination and application of these beta factors efficiently and reliably.
{"url":"https://www.smart-zebra.de/post/raw-beta-vs-adjusted-beta-choosing-the-right-beta-factors-in-the-capm","timestamp":"2024-11-04T17:27:19Z","content_type":"text/html","content_length":"51800","record_id":"<urn:uuid:78727250-b00d-4158-af5b-a03f2c0f9924>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00873.warc.gz"}
Your Trusted A Math & E Math Tutor in Singapore ASM offers weekly A math and E math tuition for secondary school children in order to ACE their GCE O Level examination Secondary School Math Tuition During upper secondary school (in secondary 3 and 4), Singapore students have the option of taking Additional Mathematics (aka A Math) as one of their GCE O levels subjects. The subject has a wide variety of problems and topics as compared to the E Math (Mathematics) curriculum. Opportunities to think critically about mathematics are provided to students during Additional Mathematics. You will not only apply formulas; you’ll need to understand and problem solve how the formulas work and their permutations to solve more challenging mathematical problems. The Ministry of Education updates the syllabus of both subject regularly Therefore, if students require additional help, it is imperative seek out experienced tutors who understand the mathematical concepts as well as syllabus to help your child excel in their exams. Our Approach for Math Tuition Systematic Mathematical Method Our tutors use tried and tested approaches during their E and A maths tuition sessions. We know that parents want tangible results from tuition. We cannot guarantee results but, if students follow our curriculum and homework schedule, we are sure that they will see vast improvements. Tutor’s Qualifications And Experience Our teachers have years of experience helping secondary school students in the additional mathematics and math subjects Our E and A mathematics tutors are constantly reviewing and updating their lesson plans to fit the newest Singapore GCE O Levels syllabus FAQs for our Secondary School E & A Math Tuition Is Additional Mathematics easy? At the ‘O’ Level, Additional Mathematics is a challenging course. Students sometimes perform well on in-class tests but struggle to finish their papers on time or have mental blockages while taking high-stakes exams like the preliminary exams or the ‘O’ Level exam. In general, the Ministry of Education also set Singapore’s GCE O Level syllabus is also more challenging compared to other Is A Math harder than E Maths? Elementary and additional math has some significant differences (A-Maths). In contrast to A-Maths, regular math is complex but easy to grasp. E-Maths solves the same problem using simultaneous equations. Linear simultaneous equations provide a solution. A-Maths students must know their algebra well because it is a continuation of their E-Math curriculum. The answers are more complicated, so prepare carefully to avoid making mistakes. E-Math is essential, and A-Math is advanced. It’s more complicated because you require more core knowledge. The complexity of E-Maths may cause certain pupils more significant problems than A-Maths because E-Maths contain multiple solutions to the same problem. A-Math is systematic. How much does Math tuition in Singapore cost? We offer 3 different kinds of tutoring for both E Math and A Math lessons: online, group classes and one-on-one classes. Each class will have a differing price point, we hope to offer affordable options for students of all backgrounds. Group Online Lessons: $50 Per hour Group physical Lessons: $65 per hour One-0n-one Lessons: $80 per hour How Experienced are Your Tutors All our tutors undergo a rigorous vetting and training program before they are allowed to teach. All our teachers are well versed in the concepts and syllabus of Additional Mathematics and E Math. Our teachers have a combined experience of over 30 years teaching Singapore students in lower secondary and upper secondary. Head tutors were previously MOE (Ministry of Education) train teachers as well. So your child is in good hands! Is Additional Math Important? Students in Singapore’s secondary schools with a solid mathematical background may study Additional Mathematics during secondary 3. This advanced course covers algebra binomial expansion, plane geometry proofs, differentiation, and integration. Additional Mathematics is required for H2 Mathematics and H2 Further Mathematics (for students entering Junior College). Students who didn’t take Additional Math at the ‘O’ level will likely take H1 Math in JC. Every student follows the same curriculum from secondary 1 through secondary 2. The sec 1-2 Standard Mathematics curriculum builds upon the what students have learnt in their primary school, while the sec 3 & 4. The new concepts and skills taught in Foundation Mathematics are just a small part of what is taught in the more comprehensive Standard Mathematics course. The material covered in O-Level Mathematics is expanded upon in the coursework for Standard Mathematics. The concepts included in the N(A)-Level 1 Mathematics syllabus are similar to those covered in the Standard Mathematics syllabus. However, this syllabus is a subset of the O-Level Mathematics curriculum. The mathematics curriculum at N(T)-Level 2 is an expansion of the mathematics curriculum at the Foundation Level. The O-Level Additional Mathematics curriculum builds on the material covered in the O-Level Mathematics course and provides a deeper dive into some of the most important concepts. Math at the N(A) Level is a selection from Math at the O(A) Level. O-Level Mathematics and O-Level Additional Mathematics give you the knowledge you need to be ready for H2 Mathematics at the pre-university level. Mathematics is not required before college. The mathematics curriculum for H1 is an extension of the mathematics curriculum for the O-level. Some material from O-Level Additional Mathematics is assumed for the H2 Mathematics course. There is a natural progression from H2 mathematics into H3 mathematics. Students may need to prepare for and excel in additional mathematics to take math at the H1 or H2 level in junior college. This is because both H1 and H2 math courses assume knowledge of other areas of mathematics. Students interested in the social sciences, accountancy, business, or economics will find H1 mathematics an excellent stepping stone. If you want to study these subjects in college, you’ll need at least a B in mathematics from the H1 level or an A in math from the O level. On the other hand, H2 math offers enough preparation for university-level physics, mathematics, and engineering. In addition, having a solid understanding of the additional mathematics and having done well in the subject might be beneficial for a kid who hasn’t decided what they want to study in What Topics are Covered for E Math? Here are the topics covered in the GCE O level Singapore Mathematics curriculum • NUMBER AND ALGEBRA □ Numbers and their operations □ Ratio and proportion □ Percentage □ Rate and speed □ Algebraic expressions and formulae □ Functions and graphs □ Equations and inequalities □ Set language and notation □ Matrices □ Problems in real world contexts • GEOMETRY AND MEASUREMENT □ Angles, triangles and polygons □ Congruence and similarity □ Properties of circles □ Pythagoras’ theorem and trigonometry □ Mensuration □ Coordinate geometry □ Vectors in two dimensions □ Problems in real world contexts □ Data analysis □ Probability What Topics are Covered for Additional Mathematics? Here are the topics covered in the Singapore GCE Additional Mathematics syllabus: • ALGEBRA □ Quadratic functions □ Equations and inequalities □ Surds □ Polynomials and partial fractions □ Binomial expansions □ Exponential and logarithmic functions • GEOMETRY AND TRIGONOMETRY □ Trigonometric functions, identities and equations □ Coordinate geometry in two dimensions □ Proofs in plane geometry • CALCULUS □ Differentiation and integration Book a Trial Math Lesson! We offer trial lessons to see if your child enjoys our tuition lessons. Please fill up the form below. Student and Parent’s Feedback on our math lessons Seth has help me a lot for my A math subject. I highly recommend his tuition lessons! “Dramatic improvement in math exam results” My son had struggled with his math exams. But with the help of ASM’s math tuition he’s results improved a lot. Tuition Center 491 Jurong West Ave 1, #02-153, Singapore 640491 Monday: 11am – 9pm Tuesday: 11am – 9pm Wednesday: Closed Thursday: 11am – 9pm Friday: 11am – 11pm Saturday: 11am – 11pm Sunday: Closed
{"url":"https://www.afterschoolmath.org/","timestamp":"2024-11-12T00:26:38Z","content_type":"text/html","content_length":"137571","record_id":"<urn:uuid:d082f3e0-0d80-47c1-a4bb-4fb6350bd1dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00428.warc.gz"}
CS 5302 Data Structures and Algorithm Tutorial 5 Tree Written Exercise Give an O(n) algorithm for computing the depth of all the nodes of a tree T, where n is the number of nodes of T. For a tree T, let NI denote the number of internal nodes, and let NE denote the number of its external nodes. Show that if every internal node in T has exactly 3 children, then NE = 2NI + 1. Programming Exercise Design and implement two ADT for binary tree respectively with (a) linked structure, and (b) array based representation. Your ADT should support the methods listed below. Some source codes can be found in pages 283-287. Analyze the running time of each method. Create and return a new node r storing element e and make r the root. An error occurs if the tree is not empty. Create and return a new node w storing element e, and w as the left child of v and return w. An error occurs if v already has a left child. Create and return a new node z storing element e, add z as the right child of v and return z. An error occurs if v already has a right child Perform preorder traversal of the binary tree.
{"url":"https://studylib.net/doc/5899921/t5-bak","timestamp":"2024-11-11T10:38:30Z","content_type":"text/html","content_length":"59590","record_id":"<urn:uuid:43514a82-dfeb-4d01-91f7-fca9773d8d78>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00137.warc.gz"}
We continue to live through incredibly turbulent times. In the past decade or so we have experienced a global financial crisis, a global health emergency, seen the UK’s departure from the European Union, and witnessed increasing levels of geopolitical tension and conflict. Add to this the effects from the climate emergency and it easy to see why the issue of economic uncertainty is so important when thinking about a country’s economic prospects. In this blog we consider how we can capture this uncertainty through a World Uncertainty Index and the ways by which economic uncertainty impacts on the macroeconomic environment. World Uncertainty Index Hites Ahir, Nicholas Bloom and Davide Furceri have constructed a measure of uncertainty known as the World Uncertainty Index (WUI). This tracks uncertainty around the world using the process of ‘text mining’ the country reports produced by the Economist Intelligence Unit. The words searched for are ‘uncertain’, ‘uncertainty’ and ‘uncertainties’ and a tally is recorded based on the number of times they occur per 1000 words of text. To produce the index this figure is then multiplied up by 100 000. A higher number therefore indicates a greater level of uncertainty. For more information on the construction of the index see the 2022 article by Ahir, Bloom and Furceri linked below. here for a PowerPoint) shows the WUI both globally and in the UK quarterly since 1991. The global index covers 143 countries and is presented as both a simple average and a GDP weighted average. The UK WUI is also shown. This is a three-quarter weighted average, the authors’ preferred measure for individual countries, where increasing weights of 0.1, 0.3 and 0.6 are used for the three most recent quarters. From Figure 1 we can see how the level of uncertainty has been particularly volatile over the past decade or more. Events such as the sovereign debt crisis in parts of Europe in the early 2010s, the Brexit referendum in 2016, the COVID-pandemic in 2020–21 and the invasion of Ukraine in 2022 all played their part in affecting uncertainty domestically and internationally. Uncertainty, risk-aversion and aggregate demand Now the question turns to how uncertainty affects economies. One way of addressing this is to think about ways in which uncertainty affects the choices that people and businesses make. In doing so, we could think about the impact of uncertainty on components of aggregate demand, such as household consumption and investment, or capital expenditures by firms. here for a PowerPoint), investment is particularly volatile, and much more so than household spending. Some of this can be attributed to the ‘lumpiness’ of investment decisions since these expenditures tend to be characterised by indivisibility and irreversibility. This means that they are often relatively costly to finance and are ‘all or nothing’ decisions. In the context of uncertainty, it can make sense therefore for firms to wait for news that makes the future clearer. In this sense, we can think of uncertainty rather like a fog that firms are peering through. The thicker the fog, the more uncertain the future and the more cautious firms are likely to be. The greater caution that many firms are likely to adopt in more uncertain times is consistent with the property of risk-aversion that we often attribute to a range of economic agents. When applied to household spending decisions, risk-aversion is often used to explain why households are willing to hold a buffer stock of savings to self-insure against unforeseen events and their future financial outcomes being worse than expected. Hence, in more uncertain times households are likely to want to increase this buffer further. Risk aversion is consistent with the property of diminishing marginal utility of income or consumption. In other words, as people’s total spending volumes increase, their levels of utility or satisfaction increase but at an increasingly slower rate. It is this which explains why individuals are willing to engage with the financial system to reallocate their expected life-time earnings and have a smoother consumption profile than would otherwise be the case from their fluctuating incomes. Yet diminishing marginal utility not only explains consumption smoothing, but also why people are willing to engage with the financial system to have financial buffers as self-insurance. It explains why people save more or borrow less today than suggested by our base-line consumption smoothing model. It is the result of people’s greater dislike (and loss of utility) from their financial affairs being worse than expected than their like (and additional utility) from them being better than expected. This tendency is only likely to increase the more uncertain times are. The result is that uncertainty tends to lower household consumption with perhaps ‘big-ticket items’, such as cars, furniture, and expensive electronic goods, being particularly sensitive to uncertainty. Uncertainty and confidence Uncertainty does not just affect risk; it also affects confidence. Risk and confidence are often considered together, not least because their effects in generating and transmitting shocks can be difficult to disentangle. We can think of confidence as capturing our mood or sentiment, particularly with respect to future economic developments. Figure 3 plots the Uncertainty Index for the UK alongside the OECD’s composite consumer and business confidence indicators. Values above 100 for the confidence indicators indicate greater confidence about the future economic situation and near-term business environment, while values below 100 indicate pessimism towards the future economic and business environments. here for a PowerPoint). Haddow, Hare, Hooley and Shakir (see link below) argue that the evidence tends to point to changes in uncertainty affecting confidence, but with less evidence that changes in confidence affect uncertainty. To illustrate this, consider the global financial crisis of the late 2000s. The argument can be made that the heightened uncertainty about future prospects for households and businesses helped to erode their confidence in the future. The result was that people and businesses revised down their expectations of the future (pessimism). However, although people were more pessimistic about the future, this was more likely to have been the result of uncertainty rather than the cause of further uncertainty. For economists and policymakers alike, indicators of uncertainty, such as the Ahir, Bloom and Furceri World Uncertainty Index, are invaluable tools in understanding and forecasting behaviour and the likely economic outcomes that follow. Some uncertainty is inevitable, but the persistence of greater uncertainty since the global financial crisis of the late 2000s compares quite starkly with the relatively lower and more stable levels of uncertainty seen from the mid-1990s up to the crisis. Hence the recent frequency and size of changes in uncertainty show how important it to understand how uncertainty effects transmit through economies. Academic papers National Bureau of Economic Research, Working Paper 29763, Hites Ahir, Nicholas Bloom and Davide Furceri (February 2022) Brookings Papers on Economic Activity, Christopher D Carroll (Vol 2, 1992) Bank of England Quarterly Bulletin, 2013 Q2, Abigail Haddow, Chris Hare, John Hooley and Tamarah Shakir (13/6/13) Economics Observatory, Ahmet Kaya (1/3/24) IMF Blog, Mario Catalán, Andrea Deghi and Mahvash S Qureshi (15/10/24) Reuters, Jonathan Cable and Leika Kihara (1/10/24) BBC News, Faisal Islam and Michael Race (20/9/24) The Guardian, Phillip Inman and Graeme Wearden (23/9/24) FashionUnited, Don-Alvin Adegeest (6/8/24) 1. (a) Explain what is meant by the concept of diminishing marginal utility of consumption. (b) Explain how this concept helps us to understand both consumption smoothing and the motivation to engage in buffer-stock saving. 2. Explain the distinction between confidence and uncertainty when analysing macroeconomic shocks. 3. Discuss which types of expenditures you think are likely to be most susceptible to uncertainty shocks. 4. Discuss how economic uncertainty might affect productivity and the growth of potential output. 5. How might the interconnectedness of economies affect the transmission of uncertainty effects through economies? 1. What factors drive the currency carry trade? 2. Is the carry trade a form of arbitrage? 3. Find out and explain what has happened to the Japanese yen since this blog was written. 4. Find out and explain some other examples of carry trades. 5. Why are expectations so important in determining the extent and timing of the unwinding of carry trades? 1. What is meant by each of the following terms: (a) net borrowing; (b) primary deficit; (c) net debt? 2. Explain how the following affect the path of the public-sector debt-to-GDP ratio: (a) interest rates; (b) economic growth; (c) the existing debt-to-GDP ratio. 3. Which factors during the 2010s were affecting the fiscal arithmetic of public debt positively, and which negatively? 4. Discuss the prospects for the fiscal arithmetic of public debt in the coming years. 5. Assume that a country has an existing public-sector debt-to-GDP ratio of 60 percent. (a) Using the ‘rule of thumb’ for public debt dynamics, calculate the approximate primary balance it would need to run in the coming year if the expected average real interest rate on the debt were 3 per cent and real economic growth were 2 per cent? (b) Repeat (a) but now assume that real economic growth is expected to be 4 per cent. (c) Repeat (a) but now assume that the existing public-sector debt-to-GDP ratio is 120 per cent. (d) Using your results from (a) to (c) discuss the factors that affect the fiscal arithmetic of the growth of public-sector debt. 1. Explain what is meant by the following fiscal terms: (a) structural deficit; (b) automatic stabilisers; (c) discretionary fiscal policy; (d) primary deficit. 2. What is the difference between current and capital public expenditures? Give some examples of each. 3. Consider the following two examples of public expenditure: grants from government paid to the private sector for the installation of energy-efficient boilers, and welfare payments to unemployed people. How are these expenditures classified in the public finances and what fiscal objectives do you think they meet? 4. Which of the following statements about the primary balance is FALSE? (a) In the presence of debt interest payments a primary deficit will be smaller than a budget deficit. (b) In the presence of debt interest payments a primary surplus will be smaller than a budget surplus. (c) The primary balance differs from the budget balance by the size of debt interest payments. (d) None of the above. 5. Explain the difference between a fiscal impulse and a fiscal multiplier. 6. Why is low economic growth likely to affect the sustainability of the public finances? What other factors could also matter? 1. Explain the law of comparative advantage and demonstrate how trade between two countries can lead to both countries gaining. 2. What are the main economic problems arising from globalisation? 3. Is the answer to the problems of globalisation to move towards greater autarky? 4. Would the expansion/further integration of trading blocs be a means of exploiting the benefits of globalisation while reducing the risks? 5. Is the role of the US dollar likely to decline over time and, if so, why? 6. Summarise Karl Polanyi’s arguments in The Great Transformation (see the Daniel W. Drezner article linked below). How well do they apply to the current world situation? A common practice of international investors is to take part in the so-called ‘carry trade’. This involves taking advantage of nominal interest rate differences between countries. For example, assume that interest rates are low in Japan and high in the USA. It is thus profitable to borrow yen in Japan at the low interest rate, exchange it into US dollars and deposit the money at the higher interest rate available in the USA. If there is no change in the exchange rate between the dollar and the yen, the investor makes a profit equal to the difference in the interest rates. Rather than depositing the money in a US bank account, an alternative is to purchase US bonds or other assets in the USA, where the return is again higher than that in Japan. If, however, interest-rate differentials narrow, there is the possibility of the carry trade ‘unwinding’. Not only may the carry trade prove unprofitable (or less so), but investors may withdraw their deposits and pay back the loans. This, as we shall, can have adverse consequences on exchange rates. The problem of an unwinding of the carry trade is not new. It worsened the underlying problems of the financial crisis in 2008. The question today is whether history is about to repeat itself with a new round of unwinding of the carry trade threatening economic growth and recovery around the world. We start by looking at what happened in 2008. The carry trade and the 2008 financial crisis here for a PowerPoint). The carry trade saw investors borrowing money in Japan and Switzerland, exchanging it on the foreign exchange market, with the currency then deposited in the UK, USA and Australia. Hundreds of billions worth of dollars were involved in this carry trade. If, however, the higher interest rates in the UK and other deficit countries were simply to compensate investors for the risk of currency depreciation, then there would be no excessive inflow of finance. The benefit of the higher interest rate would be offset by a depreciating currency. But the carry trade had the effect of making deficit currencies appreciate, thereby further boosting the carry trade by speculation of further exchange rate rises. Thus the currencies of deficit countries appreciated, making their goods less competitive and worsening their current account deficit. Between 1996 and 2006, the average current account deficits as a percentage of GDP for Australia, the USA and the UK were close to 4½, 4 and 2, respectively. Between January 1996 and December 2006, the broad-based real exchange rate index of the Australian dollar appreciated by 17%, of the US dollar by 4% and of sterling by some 23%. With the credit crunch of 2007/8, the carry trade unwound. Much of the money deposited in the USA had been in highly risky assets, such as sub-prime mortgages. Investors scrambled to sell their assets in the USA, UK and the EU. Loans from Japan and Switzerland were repaid and these countries, seen as ‘safe havens’, attracted deposits. The currencies of deficit countries, such as the UK and USA, began to depreciate and those of surplus countries, such as Japan and Switzerland, began to appreciate. Between September 2007 and September 2008, the real exchange rate indices of the US dollar and sterling depreciated by 2% and 13% respectively; the yen and the Swiss franc appreciated by 3% and 2¾%. This represented a ‘double whammy’ for Japanese exporters. Not only did its currency appreciate, making its exports more expensive in dollars, euros, pounds, etc., but the global recession saw consumers around the world buying less. As a result, the Japanese economy suffered the worst recession of the G7 economies. The carry trade in recent months Since 2016, there has been a re-emergence of the carry trade as the Fed began raising interest rates while the Bank of Japan kept rates at the ultra low level of –0.1% (see Figure 1). The process slowed down when the USA lowered interest rates in 2020 in response to the pandemic and fears of recession. But when the USA, the EU and the UK began raising rates at the beginning of 2022 in response to global inflationary pressures, while Japan kept its main rate at –0.1%, so the carry trade resumed in earnest. Cross-border loans originating in Japan (not all of it from the carry trade) had risen to ¥157tn ($1tn) by March 2024 – a rise of 21% from 2021. here for a PowerPoint). Although this depreciation of the yen helped Japanese exports, it also led to rising prices. Japanese inflation rose steadily throughout 2022. In the 12 months to January 2022 the inflation rate was 0.5% (having been negative from October 2020 to August 2021). By January 2023, the annual rate had risen to 4.3% – a rate not seen since 1981. The Bank of Japan was cautious about raising interest rates to suppress this inflation, however, for fear of damaging growth and causing the exchange rate to appreciate and thereby damaging exports. Indeed, quarterly economic growth fell from 1.3% in 2023 Q1 to –1.0% in 2023 Q3. But then, with growth rebounding and the yen depreciating further, in March 2024 the Bank of Japan decided to raise its key rate from –0.1% to 0.1%. This initially had the effect of stabilising the exchange rate. But then with the yen depreciating further and inflation rising from 2.5% to 2.8% in May and staying at this level in June, the Bank of Japan increased the key rate again at the end of July – this time to 0.25% – and there were expectations that there would be another rise before the end of the year. At the same time, there were expectations that the Fed would soon lower its main rate (the Federal Funds Rate) from its level of 5.33%. The ECB and the Bank of England had already begun lowering their main rates in response to lower inflation. The carry trade rapidly unwound. Investors sold US, EU and UK assets and began repaying yen loans. here for a PowerPoint). Between 31 July (the date the Bank of Japan raised interest rates the second time) and 5 August, the dollar depreciated against the yen from ¥150.4 to ¥142.7. In other words, the value of 100 yen appreciated from $0.66 to $0.70 – an appreciation of the yen of 6.1%. Fears about the unwinding of the carry trade led to falls in stock markets around the world. Not only were investors selling shares to pay back the loans, but fears of the continuing process put further downward pressure on shares. From 31 July to 5 August, the US S&P 500 fell by 6.1% and the tech-heavy Nasdaq by 8.0%. here for a PowerPoint). Although the yen has since depreciated slightly (a rise in the yen/dollar rate) and stock markets have recovered somewhat, expectations of many investors are that the unwinding of the yen carry trade has some way to go. This could result in a further appreciation of the yen from current levels of around ¥100 = $0.67 to around $0.86 in a couple of years’ time. There are also fears about the carry trade in the Chinese currency, the yuan. Some $500 billion of foreign currency holdings have been acquired with yuan since 2022. As with the Japanese carry trade, this has been encouraged by low Chinese interest rates and a depreciating yuan. Not only are Chinese companies investing abroad, but foreign companies operating in China have been using their yuan earnings from their Chinese operations to invest abroad rather than in China. The Chinese carry trade, however, has been restricted by the limited convertibility of the yuan. If the Chinese carry trade begins to unwind when the Chinese economy begins to recover and interest rates begin to rise, the effect will probably be more limited than with the yen. CNN, Allison Morrow (7/8/24) CNBC, Sam Meredith (13/8/24) Financial Times, Leo Lewis and David Keohane (7/8/24) Financial Times, John Plender (10/8/24) Forbes, Frank Holmes (12/8/24) Alt21 (26/1/23) The Conversation, Charles Read (9/8/24) Reuters, Winni Zhou and Summer Zhen (13/8/24) Yahoo Finance/Bloomberg, David Finnerty and Ruth Carson (16/8/24) Investopedia, Kathy Lien (9/8/24) Finimize, Stéphane Renevier (13/8/24) The past decade or so has seen large-scale economic turbulence. As we saw in the blog Fiscal impulses, governments have responded with large fiscal interventions. The COVID-19 pandemic, for example, led to a positive fiscal impulse in the UK in 2020, as measured by the change in the structural primary balance, of over 12 per cent of national income. The scale of these interventions has led to a significant increase in the public-sector debt-to-GDP ratio in many countries. The recent interest rates hikes arising from central banks responding to inflationary pressures have put additional pressure on the financial well-being of governments, not least on the financing of their debt. Here we discuss these pressures in the context of the ‘r – g’ rule of sustainable public debt. Public-sector debt and borrowing here for a PowerPoint of Chart 1) Chart 1 shows the impact of the fiscal interventions associated with the global financial crisis and the COVID-19 pandemic, when net borrowing rose to 10 per cent and 15 per cent of GDP respectively. The former contributed to the debt-to-GDP ratio rising from 35.6 per cent in 2007/8 to 81.6 per cent in 2014/15, while the pandemic and subsequent cost-of-living interventions contributed to the ratio rising from 85.2 per cent in 2019/20 to around 98 per cent in 2023/24. Sustainability of the public finances The analysis therefore implies that the sustainability of public-sector debt is dependent on at least three factors: existing debt levels, the implied average interest rate facing the public sector on its debts, and the rate of economic growth. These three factors turn out to underpin a well-known rule relating to the fiscal arithmetic of public-sector debt. The rule is sometimes known as the ‘ r – g’ rule (i.e. the interest rate minus the growth rate). Underpinning the fiscal arithmetic that determines the path of public-sector debt is the concept of the ‘primary balance’. This is the difference between the sector’s receipts and its expenditures less its debt interest payments. A primary surplus (a positive primary balance) means that receipts exceed expenditures less debt interest payments, whereas a primary deficit (a negative primary balance) means that receipts fall short. The fiscal arithmetic necessary to prevent the debt-to-GDP ratio rising produces the following stable debt equation or ‘r – g’ rule: On the left-hand side of the stable debt equation is the required primary surplus (PS) to GDP (Y) ratio. Moving to the right-hand side, the first term is the existing debt-to-GDP ratio (D/Y). The second term ‘r – g’, is the differential between the average implied interest rate the government pays on its debt and the growth rate of the economy. These terms can be expressed in either nominal or real terms as this does not affect the differential. To illustrate the rule consider a country whose existing debt-to-GDP ratio is 1 (i.e. 100 per cent) and the ‘r – g’ differential is 0.02 (2 percentage points). In this scenario they would need to run a primary surplus to GDP ratio of 0.02 (i.e. 2 percent of GDP). The ‘r – g‘ differential The ‘r – g’ differential reflects macroeconomic and financial conditions. The fiscal arithmetic shows that these are important for the dynamics of public-sector debt. The fiscal arithmetic is straightforward when r = g as any primary deficit will cause the debt-to-GDP ratio to rise, while a primary surplus will cause the ratio to fall. The larger is g relative to r the more favourable are the conditions for the path of debt. Importantly, if the differential is negative (r < g), it is possible for the public sector to run a primary deficit, up to the amount that the stable debt equation permits. r – g’ differential has affected debt sustainability in the UK since 1990. Chart 2 plots the implied yield on 10-year government bonds, alongside the annual rate of nominal growth (click here for a PowerPoint). As John explains in his blog The bond roller coaster, the yield is calculated as the coupon rate that would have to be paid for the market price of a bond to equal its face value. Over the period, the average annual nominal growth rate was 4.5 per cent, while the implied interest rate was almost identical at 4.6 per cent. The average annual rate of CPI inflation over this period was 2.8 per cent. r – g’ differential which is simply the difference between the two series in Chart 2, along with a 12-month rolling average of the differential to help show better the direction of the differential by smoothing out some of the short-term volatility (click here for a PowerPoint). The differential across the period is a mere 0.1 percentage points implying that macroeconomic and financial conditions have typically been neutral in supporting debt sustainability. However, this does mask some significant changes across the period. We observe a general downward trend in the ‘r – g’ differential from 1990 up to the time of the global financial crisis. Indeed between 2003 and 2007 we observe a favourable negative differential which helps to support the sustainability of public debt and therefore the well-being of the public finances. This downward trend of the ‘r – g’ differential was interrupted by the financial crisis, driven by a significant contraction in economic activity. This led to a positive spike in the differential of over 7 percentage points. Consequently, the negative ‘r – g’ differential meant that the public sector could continue to run primary deficits during the 2010s, despite the now much higher debt-to-GDP ratio. Yet, weak growth was placing limits on this. Chart 4 indeed shows that primary deficits fell across the decade (click here for a PowerPoint). The pandemic and beyond r – g’ differential again turn markedly positive, averaging 7 percentage points in the four quarters from Q2 of 2020. While the differential again turned negative, the debt-to-GDP ratio had also increased substantially because of large-scale fiscal interventions. This made the negative differential even more important for the sustainability of the public finances. The question is how long the negative differential can last. Looking forward, the fiscal arithmetic is indeed uncertain and worryingly is likely to be less favourable. Interest rates have risen and, although inflationary pressures may be easing somewhat, interest rates are likely to remain much higher than during the past decade. Geopolitical tensions and global fragmentation pose future inflationary concerns and a further drag on growth. As well as the short-term concerns over growth, there remain long-standing issues of low productivity which must be tackled if the growth of the UK economy’s potential output is to be raised. These concerns all point to the important ‘r – g’ differential become increasingly less negative, if not positive. If so the fiscal arithmetic could mean increasingly hard maths for policymakers. House of Commons Library (8/6/23) Peterson Institute for International Economics, Olivier Blanchard (6/11/23) ITV News, Robert Peston (13/7/23) Sky News, James Sillars (13/7/23) BBC News (20/10/23) BBC News, Vishala Sri-Pathma & Faisal Islam (4/10/23) Markets Insider, Filip De Mott (16/11/23) Financial Times, Matt King (17/11/23) The Guardian, Richard Partington (20/10/23) Financial Times, Martin Wolf (13/11/23) Office for Budget Responsibility Financial Times UK Debt Management Office Bank of England In his blog, The bond roller coaster, John looks at the pricing of government bonds and details how, in recent times, governments wishing to borrow by issuing new bonds are having to offer higher coupon rates to attract investors. The interest rate hikes by central banks in response to global-wide inflationary pressures have therefore spilt over into bond markets. Though this evidences the ‘pass through’ of central bank interest rate increases to the general structure of interest rates, it does, however, pose significant costs for governments as they seek to finance future budgetary deficits or refinance existing debts coming up to maturity. The Autumn Statement in the UK is scheduled to be made on 22 November. This, as well as providing an update on the economy and the public finances, is likely to include a number of fiscal proposals. It is thus timely to remind ourselves of the size of recent discretionary fiscal measures and their potential impact on the sustainability of the public finances. In this first of two blogs, we consider the former: the magnitude of recent discretionary fiscal policy changes. First, it is important to define what we mean by discretionary fiscal policy. It refers to deliberate changes in government spending or taxation. This needs to be distinguished from the concept of automatic stabilisers, which relate to those parts of government budgets that automatically result in an increase (decrease) of spending or a decrease (increase) in tax payments when the economy slows (quickens). The suitability of discretionary fiscal policy measures depends on the objectives they trying to fulfil. Discretionary measures can be implemented, for example, to affect levels of public-service provision, the distribution of income, levels of aggregate demand or to affect longer-term growth of aggregate supply. As we shall see in this blog, some of the large recent interventions have been conducted primarily to support and stabilise economic activity in the face of heightened economic volatility. The fiscal impulse The large-scale economic turbulence of recent years associated first with the global financial crisis of 2007–9 and then with the COVID-19 pandemic and the cost-of-living crisis, has seen governments respond with significant discretionary fiscal measures. During the COVID-19 pandemic, examples of fiscal interventions in the UK included the COVID-19 Business Interruption Loan Scheme (CBILS), grants for retail, hospitality and leisure businesses, the COVID-19 Job Retention Scheme (better known as the furlough scheme) and the Self-Employed Income Support Scheme. fiscal impulse. This captures the magnitude of change in discretionary fiscal policy and thus the size of the stimulus. The concept is not to be confused with fiscal multipliers, which measure the impact of fiscal changes on economic outcomes, such as real national income and employment. By measuring fiscal impulses, we can analyse the extent to which a country’s fiscal stance has tightened, loosened, or remained unchanged. In other words, we are attempting to capture discretionary fiscal policy changes that result in structural changes in the government budget and, therefore, in structural changes in spending and/or taxation. To measure structural changes in the public-sector’s budgetary position, we calculate changes in structural budget balances. A budget balance is simply the difference between receipts (largely taxation) and spending. A budget surplus occurs when receipts are greater than spending, while a deficit (sometimes referred to as net borrowing) occurs if spending is greater than receipts. A structural budget balance cyclically-adjusts receipts and spending and hence adjusts for the position of the economy in the business cycle. In doing so, it has the effect of adjusting both receipts and spending for the effect of automatic stabilisers. Another way of thinking about this is to ask what the balance between receipts and spending would be if the economy were operating at its potential output. A deterioration in a structural budget balance infers a rise in the structural deficit or fall in the structural surplus. This indicates a loosening of the fiscal stance. An improvement in the structural budget balance, by contrast, indicates a tightening. The size of UK fiscal impulses A frequently-used measure of the fiscal impulse involves the change in the cyclically-adjusted public-sector primary deficit. here for a PowerPoint of the chart.) The size of the fiscal impulse is measured by the year-on-year percentage point change in the cyclically-adjusted public-sector primary deficit as a percentage of potential GDP. A larger deficit or a smaller surplus indicates a fiscal loosening (a positive fiscal impulse), while a smaller deficit or a larger surplus indicates a fiscal tightening (a negative fiscal impulse). here for a PowerPoint of the chart.) In 2020 the cyclically-adjusted primary deficit to potential output ratio rose from 1.67 to 14.04 per cent. This represents a positive fiscal impulse of 12.4 per cent of GDP. A tightening of fiscal policy followed the waning of the pandemic. 2021 saw a negative fiscal impulse of 10.1 per cent of GDP. Subsequent tightening was tempered by policy measures to limit the impact on the private sector of the cost-of-living crisis, including the Energy Price Guarantee and Energy Bills Support Scheme. In comparison, the fiscal response to the global financial crisis led to a cumulative increase in the cyclically-adjusted primary deficit to potential GDP ratio from 2007 to 2009 of 5.0 percentage points. Hence, the financial crisis saw a positive fiscal impulse of 5 per cent of GDP. While smaller in comparison to the discretionary fiscal responses to the COVID-19 pandemic, it was, nonetheless, a sizeable loosening of the fiscal stance. Sustainability and well-being of the public finances The recent fiscal interventions have implications for the financial well-being of the public-sector. Not least, the financing of the positive fiscal impulses has led to a substantial growth in the accumulated size of the public-sector debt stock. At the end of 2006/7 the public-sector net debt stock was 35 per cent of GDP; at the end of the current financial year, 2023/24, it is expected to be 103 per cent. As we saw at the outset, in an environment of rising interest rates, the increase in the public-sector debt to GDP ratio creates significant additional costs for government, a situation that is made more difficult for government not only by the current flatlining of economic activity, but by the low underlying rate of economic growth seen since the financial crisis. The combination of higher interest rates and lower economic growth has adverse implications for the sustainability of the public finances and the ability of the public sector to absorb the effects of future economic crises. BBC News (16/11/23) House of Commons Library (13/11/23) Vox^EU, Niels Thygesen, Roel Beetsma, Massimo Bordignon, Xavier Debrun, Mateusz Szczurek, Martin Larch, Matthias Busse, Mateja Gabrijelcic, Laszlo Jankovics and Janis Malzubris (30/6/23) Reuters, Jan Strupczewski (28/6/23) Brookings, Eli Asdourian, Louise Sheiner, and Lorae Stojanovic (27/10/23) Institute for Fiscal Studies, Carl Emmerson, Paul Johnson and Ben Zaranko (eds) (October 2023) Office for Budget Responsibility Over the decades, economies have become increasingly interdependent. This process of globalisation has involved a growth in international trade, the spread of technology, integrated financial markets and international migration. When the global economy is growing, globalisation spreads the benefits around the world. However, when there are economic problems in one part of the world, this can spread like a contagion to other parts. This was clearly illustrated by the credit crunch of 2007–8. A crisis that started in the sub-prime market in the USA soon snowballed into a worldwide recession. More recently, the impact of Covid-19 on international supply chains has highlighted the dangers of relying on a highly globalised system of production and distribution. And more recently still, the war in Ukraine has shown the dangers of food and fuel dependency, with rapid rises in prices of basic essentials having a disproportionate effect on low-income countries and people on low incomes in richer countries. Moves towards autarky So is the answer for countries to become more self-sufficient – to adopt a policy of greater autarky? Several countries have moved in this direction. The USA under President Trump pursued a much more protectionist agenda than his predecessors. The UK, although seeking new post-Brexit trade relationships, has seen a reduction in trade as new barriers with the EU have reduced UK exports and imports as a percentage of GDP. Economic and Fiscal Outlook, Brexit will result in the UK’s trade intensity being 15 per cent lower in the long run than if it had remained in the EU. Many European countries are seeking to achieve greater energy self-sufficiency, both as a means of reducing reliance on Russian oil and gas, but also in pursuit of a green agenda, where a greater proportion of energy is generated from renewables. More generally, countries and companies are considering how to reduce the risks of relying on complex international supply chains. Limits to the gains from trade The gains from international trade stem partly from the law of comparative advantage, which states that greater levels of production can be achieved by countries specialising in and exporting those goods that can be produced at a lower opportunity cost and importing those in which they have a comparative disadvantage. Trade can also lead to the transfer of technology and a downward pressure on costs and prices through greater competition. Also, governments have been increasingly willing to support domestic industries with various non-tariff barriers to imports, especially since the 2007–8 financial crisis. Such measures include subsidies, favouring domestic firms in awarding government contracts and using regulations to restrict imports. These protectionist measures are often justified in terms of achieving security of supply. The arguments apply particularly starkly in the case of food. In the light of large price increases in the wake of the Ukraine war, many countries are considering how to increase food self-sufficiency, despite it being more costly. Also, trade in goods involves negative environmental externalities, as freight transport, whether by sea, air or land, involves emissions and can add to global warming. In 2021, shipping emitted over 830m tonnes of CO[2], which represents some 3% of world total CO[2] emissions. In 2019 (pre-pandemic), the figure was 800m tonnes. The closer geographically the trading partner, the lower these environmental costs are likely to be. here for a PowerPoint). Although trade as a percentage of GDP rose slightly from 2020 to 2021 as economies recovered from the pandemic, it is expected to have fallen back again in 2022 and possibly further in 2023. But despite this reduction in trade as a percentage of GDP, with de-globalisation likely to continue for some time, the world remains much more interdependent than in the more distant past (as the chart shows). Greater autarky may be seen as desirable by many countries as a response to the greater economic and political risks of the current world, but greater autarky is a long way from complete self-sufficiency. The world is likely to remain highly interdependent for the foreseeable future. Reports of the ‘death of globalisation’ are premature! BBC Radio 4, Ben Chu (November 2022) Spiked, Phil Mullen (7/3/22) Financial Times, Gideon Rachman (29/8/22) Jackson Hole Economics, Dani Rodrik (9/5/22) The Guardian (11/3/22) Council on Foreign Relations, Edward Alden (10/5/22) Geopolitical Futures, Fabrizio Maronta (5/12/22) Reason, Daniel W. Drezner (October 2022) Washington Post, Matthew Yglesias (13/3/22) MailOnline, Chris Jewers (4/1/23) WTO (7/7/22)
{"url":"https://pearsonblog.campaignserver.co.uk/tag/financial-crisis/","timestamp":"2024-11-02T23:34:26Z","content_type":"text/html","content_length":"224918","record_id":"<urn:uuid:017ed13f-db7e-4d77-ac76-2705ac6bf5ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00113.warc.gz"}
Guys, in the last couple of videos, we talked about kinetic friction. In this video, we're going to talk about the other type of friction, which is called static. They have some similarities, but static friction is a little bit more complicated. So let's check it out. So remember when we talked about kinetic friction, we said that this happens when the velocity is not equal to 0. You push a book and it's moving, and static, sorry. Kinetic friction tries to stop that object and bring it to a stop. Right? Static friction happens when the velocity is equal to 0. Imagine this book is really heavy and it's at rest on the table. What static friction tries to do is it tries to prevent an object from starting to move. So imagine this book was really, really heavy. You try to push it, and no matter how much you push, the book doesn't move. That's because of static friction. So the direction of kinetic friction, right, was always opposite to the direction of motion. Right? So you push this book across the table, it's going to the right. Kinetic friction opposes with to the left. Static friction's kind of similar except the direction is going to be opposite to where the object wants to move or would move without friction. So, this heavy book, right, you're pushing it without friction. It would move to the right. So static friction is going to oppose you by going to the left. So, this is \( f_s \). Lastly, let's talk about the formulas. So, the equation for \( f_k \), the kinetic friction, is the coefficient of kinetic friction times the normal. For static friction, it's very similar, so we're just going to use the coefficient of static friction times the normal. This coefficient of static friction is really just another number, just like \( \mu_k \) is. One thing you should know about this coefficient though is that it's always going to be greater than \( \mu_k \). Normally, they're going to be given to you. So, we've got this 5.1-kilogram block that's at rest on the floor. Now we're given the coefficients of static and kinetic friction. Like we just said, this is \( \mu_s \), and we'll see that it's actually greater than \( \mu_k \). What we're trying to do in this problem is we're trying to figure out the magnitude of the friction force on the block when we push it with these forces. So, this \( F \) is 20 and this \( F \) is 40. I'm just going to draw a quick sketch of the free body diagram. So, we have our \( mg \) that's downwards. We've already got our applied force, and there's our normal. Whether this object is moving or trying to move, we know that friction is going to oppose the by going to the left. We just don't know what type of friction it is. So which equation are we going to use? Are we going to use \( \mu_f k \), or are we going to use \( f_s \)? Well, if you think about this, this block is at rest on the floor, which means that the velocity is equal to 0. And we said that when the velocity is equal to 0, we're going to use the static friction formula. So our \( f_s \) is equal to \( \mu \) static times the normal. So that means our friction force here is going to be 0.6 times the normal force. Well, if this block is only sliding horizontally and we have two forces in the vertical, that means that they have to cancel. So that means that our \( n \) is equal to \( mg \). So that just means that we're going to use 5.1 times 9.8, and you'll get a friction force that's equal to 30 Newtons. So let's talk about this. You're pushing with 20 to the right, but the force that we calculated was 30. So, even though you're pushing to the right, the friction force would win, and the book would actually start accelerating to the left, in the direction of the friction. That's crazy. Doesn't make any sense. So, what's happening here? When we use this formula, this \( \mu_s \) times the normal, this is actually called a threshold. This is basically just the amount of force that you have to overcome to get an object to start moving. This times normal is the maximum value of static friction. So, what we do is we actually call this \( f_s \) max, and this is equal to times the normal. So, when we go back here, what we have to do is this static friction formula that we use is actually maximum static friction. This is basically just the threshold that we have to overcome in order to get an object to start moving. So, what happens is this threshold is not always the actual friction that's acting on an object. To determine whether we're dealing with static friction versus kinetic friction, what we always have to do in problems is we have to compare the forces to that static friction threshold. Basically, we have to figure out whether our force, \( F \), is strong enough to get an object moving. There are really just two options. You either don't or you do. So let's talk about those. If your \( F \) is not strong enough to get an object moving, that means your force is less than or equal to that static, that maximum, static friction. At that point, the object just stays at rest. It's not enough to get it moving. If the object stays at rest, then the friction is just going to be static friction. So basically, what happens is if you haven't yet crossed this threshold, which is kind of just like a number line here, where you have increasing force, then your static friction basically always has to balance out your force. What I mean by this is that if you're pushing with 10, your friction can't oppose you with stronger than you're pulling. So, that means that the static friction, in this case, is just 10. If you're pulling harder with 20, static friction opposes your pull with 20. If you're pulling with 30, static friction just opposes your pull with 30. It always knows how much you're pushing, and it always basically balances out your force so that the object stays at rest, and the acceleration is 0. Now what happens if you actually do overcome that threshold? Basically, if you have a strong enough force to get the object moving, then your force is greater than \( f_s \) max, and what happens here is that the object starts moving. And if it starts moving, then your friction switches from static and it becomes kinetic friction. So, what happens here is that this kinetic friction we already know is just equal to \( \mu_k \) times the normal. So let's go back to our problems here and figure out what's going on. So, what we're doing here is we're basically comparing our \( F \) to our \( f_s \) max. That's how we figure out which kind of friction we're dealing with. So our \( F \) is 20, and this is actually less than \( f_s \) max, which is equal to 30. So, what that means is that our friction force is going to be static friction, and it's just going to basically balance out our pull. So, our static friction is going to be 20. So the static friction here is going to be 20 Newtons, even though your maximum is 30. Now, in part b, we don't need to recalculate the maximum. We already know that \( f_s \) max is 30. But now we're actually pulling with 40. So basically, what happens here is that our \( F \) is equal to 40, it's greater than your \( f_s \) max, which is equal to 30, which means that the friction becomes kinetic friction. And so we can calculate this by using \( \mu_k \) times the normal. So basically, our kinetic friction force is going to be 0.3, that's the coefficient that we were given, times 5.1 times 9.8. And if you work this out, you're going to get 15 Newtons. So, what happens here is we've actually crossed that maximum static friction threshold. And so, therefore, the friction that's opposing this book is going to be kinetic, and it's going to be 15 Newtons. So those are the answers. Right? We have 20 Newtons when you're not pulling hard enough and then 15 once you've actually overcome. So basically, what this means is that once you pass this threshold, it actually doesn't matter how hard you pull because the friction force that's opposing you is just \( f_k \), and so this is just going to be 15. Even if you were to pull a little bit harder with 50 Newtons, it doesn't matter because this \( \mu_k \) times the normal is just a fixed value. So even though you're pulling with 50, kinetic friction would still oppose you with 15. Alright? So that's it for this one, guys. Hopefully, I made sense. Let me know if you have any questions.
{"url":"https://www.pearson.com/channels/physics/learn/patrick/forces-dynamics-part-2/static-friction?chapterId=8fc5c6a5","timestamp":"2024-11-14T01:09:59Z","content_type":"text/html","content_length":"523478","record_id":"<urn:uuid:50a3ae7f-b057-4bce-a24f-05e7ba143670>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00141.warc.gz"}
УДК 004.896 Baikadamov S.S. Kazakh-British Technical University (Almaty, Kazakhstan) Аннотация: this research examined and developed a Supervisory Control and Data Acquisition (SCADA) system for the monitoring of PID process control. In this instance, the test model was designed to be a DC-motor system. The SCADA system offered a methodfor tracking and reducing the control loop's uncertainty by recalibrating the parameters. The material resistance, or the swift changes in the environment, were typical examples of system dynamics changes that led to the control loop's uncertainty. Despite the fact that many PLCs feature tuning processes for adjusting these parameters, PLCs still require instruction on when to tune. Integrating artificial intelligence methods into the allocation of facility resources represents a significant advancement in facility management. By leveraging the fine-grained decision-making capabilities of AI, facilities can achieve higher levels of efficiency and adaptability, ensuring resources are responsive. It can be used optimally to suit the dynamic requirements of your environment. This approach not only improves operational performance, but also contributes to the broader goals of sustainability and user satisfaction. The effective allocation of resources within a facility is essential for improving operational efficiency and long-term viability. This article examines how genetic algorithms and particle swarm optimization can be used to create and improve Proportional-Integral-Derivative (PID) controllers. These controllers play a crucial role in regulating system behavior and meeting specific performance standards in different locations. Genetic algorithms are used to find the best PID settings by gradually improving solutions through generations, guaranteeing resilience and effectiveness. Particle swarm optimization enhances the solutions by imitating social behaviors observed in nature, assisting in reaching the optimal controller parameters. Comparative studies have shown that the hybrid approach is more effective in achieving quicker response times, reducing errors, and enhancing stability in various simulated situations. This work offers a structured approach for choosing the most appropriate PID controller settings and also adds valuable insights to the study of resource management in automated systems. Ключевые слова: SCADA, artificial intelligence, genetic algorithm, facility management. Although artificial intelligence research has shown encouraging results, the construction industry currently lacks many applications. The efficiency of AI approaches in routine operations and maintenance has not yet been fully tapped into by Facility Management (FM) in the construction industry. An infrequent HVAC problem can result in a significant financial loss because HVAC is such an important aspect of Facility Management and Maintenance (FMM) operations. By reducing energy consumption, planning maintenance, and monitoring equipment, the implementation of AI approaches in FMM can optimize building performance, particularly in predictive maintenance. In the last ten years, artificial intelligence has advanced dramatically, causing fundamental shifts in technical paradigms across a range of industries, from electronics to medical research. AI techniques can analyze massive amounts of data with accuracy levels so high that they outperform human performance and productivity while also utilizing less time and money. With their applications for autonomous driving and automatically sensing traffic lights, stop signs, obstructions, and other things, those innovations completely changed the automotive industry. AI is also being utilized in automation in the construction sector to detect individuals nearby heavy equipment to increase safety. The adoption of AI techniques in the construction industry has increased due to the high performance levels of computer vision techniques like Deep Learning and Fuzzy Logic. I Literature review. The first researchers in the list from University of Bremen [1] also noted the lack of practical application in production control and deal with use of artificial neural networks as control system for shop floor environment. Next one [2] evaluated several machine learning methods and tried to prove that Deep learning is the best suitable for sphere of automation. Research [3] and [4] consist of similar ideas of automation building re- sources, for Sport facilities and Aquaponics facilities respectfully. This would help us to understand how the type and amount of resources effect our control system performance. The most recent researches like [5] or [6] provide case study of real SCADA control system. AI techniques are being used to solve device diagnostic and preventive maintenance problems using data from field devices. As it can be seen, there are several excellent researches all over the net, however they are speaking in different languages. Currently, there are no paper about applying above mentioned AI methods to the real working control system. Either its only theoretical or its only regarding manual monitoring of facility resources. In this paper we will try to combine both sides. The importance of applying intelligent distribution to production facilities by developing complex supersystems with interconnected subsystems could aid in realtime online monitoring and control [7]. Using machine learning technology, energy hubs can efficiently balance energy supply and demand, ensuring optimal use of renewable energy sources while minimizing dependence on traditional energy sources [8]. Facilities management is a set of solutions that help minimize the time and resources spent on real estate management issues and extend the life of buildings and technical systems [9]. The problem statement of the research is formulated as follows: it is necessary to study the PID controller behavior configured by several artificial intelligence methods in order to reach desired value of DC-motor output. II Research methods. In this work, first of all, the plan is to come up with control system that collects data from various field equipment such as sensors, tanks, load resistors and willing to pro- vide it some Human-Machine Interface (HMI). Next step is to input that information to AI source methods. Then we will try to train our artificial intelligence to appropriately react with certain output. For example, if the liquid in the tank is getting low, corresponding action should be taken fulfill it back. In this section would be explained the main three research methods such as genetic algorithm, fuzzy logic and particle swarm optimization. a) Genetic algorithm description. The genetic аlgorithm (GА) is an advanced optimization method based on the principles of natural selection and genetic mechanisms. Its goal is to provide approximate solutions to complex problems that may be difficult to solve using traditional methods. The algorithm simulates the evolutionary process by selecting the most suitable individuals for reproduction, thus producing offspring for subsequent generations. The main components of a GA include a set of possible solutions, selection based on fitness, crossover (recombination), and mutation. Through iterative generations, the population gradually evolves toward an optimal solution. In the Figure represented the various possibilities of genetic algorithm's application: Resource allocation Optimization problems Figure 1. Genetic algorithm's application. From the figure above it is clear that genetic algorithms are very effective in optimizing the intelligent allocation of plant resources such as logistics, production, and service management. b) Particle swarm optimization. Particle Swarm Optimization is a metaheuristic optimization algorithm inspired by the social behavior of flocks of birds and schools of fish. It is often used to solve optimization problems, such as those encountered in research on diagnostics of industrial equipment. Here is the Figure 2, where the simple flowchart of PSO process is shown: Modification of every searching poinl Iteration = iteration + 1 С Ç END ) Figure 2. Particle swarm optimization flowchart. Below is a detailed description of the advantages of PSO: - Simple and User-Friendly: PSO is easy to understand and implement and requires fewer lines of code compared to other methods of optimization. Only a few parameters need to be added, such as particles' number, the main coefficients and the inertial weight. - Versatility: PSO could be used to solve a various type of optimization task including continuous, discrete, and multimodal functions. It can easily be combined with other optimization techniques to improve performance. - Robustness: Due to the stochastic search process, PSO is less prone to being trapped in a local optimum compared to other methods. It handles noisy and dynamic optimization problems well. - Parallelism: PSO is inherently parallel and can be efficiently implemented in parallel and distributed computing systems, resulting in faster convergence. c) PID controller. PID controller is a fundamental component of feedback control loops and is used to automatically adjust a process variable to maintain it at a desired setpoint. This mechanism is common in approximately 90% of all automatic control systems due to its versatility and efficiency. The PID algorithm calculates a control signal that consists of three different terms: proportional, integral and derivative. These three terms collectively define the corrective actions required to bring the process variable back into an acceptable range, hence the name PID. Working Mechanism of a PID Controller: to understand the functionality of a PID controller, it is important to understand the dynamics of a feedback system. The heart of this system is a PID controller, which can be a separate device or an algorithm running on a microcontroller. The key parameter to be monitored is called a process variable. This variable can represent temperature, flow rate, pressure, rotation speed, or any other measurable characteristic of the system. A sensor is used to measure a process variable and send that information back to the controller, producing ЭС 1 rUIW I Control signal Figure 3. Flowchart of the feedback based PID control system. The controller is programmed with a desired value or setpoint for a process variable, which is the goal the system is trying to achieve. III Simulation modelling. A simulation is run with the DC-Motor that has the following specifications: 2hp, 230 v, 8.5 amperes, 1500 rpm Ra(Armature resistance) = 2.45 H, La(Armature Inductance) = 0.035 H, Kb(Back EMF) = 1.2 Vs/rad, Jm(Moment of Inertia) = 0.022 kgm2, Bm (Frictional Constant) = 0.5 x 10-3(NmS/rad). The transfer function of DC-Motor is given by below: 6(s) _ _12_ 7а(5) 0.00077S3 + 0.0539S2 + 1.441S , (4) Model of DC-Motor in Simulink, which is application of Matlab for modelling, is shows below: Figure 4. Model of DC-Motor with PID controller in Simulink. Step - step function which is used for providing input to our system, values between 0 and 1. K(1,2,3) - coefficients of PID controller, which we going to manipulate using different AI methods, in order to gain better results. DC-Motor - Transfer function of real DC-Motor, which was described earlier in above sections. ITAE (Integral Time Absolute Error) - absolute time of when output signal reaches and stabilizes around our set value. Main metric based on which we going to evaluate the result at each generation of current AI method. Output - scope function which contain and plots the result of current generation in a graph way. Genetic algorithm implementation. Coding of GA in Matlab workspace is shown below: Figure 5. Coding of GA in Matlab working area. 'ga' - main function of Genetic Algorithm, 'no_var' - number of variables which we manipulate during simulation, in order to minimize final output, 'lb' - lower minimal value, if algorithm reaches it simulation would stop, 'ub' - upper maximum value, if algorithm reaches it simulation would stop, 'ga_opt' - parameters of gen_algorithm, in this case its set to 50 generations with 50 population in each, k,best - would collect final result each time and output the best one at the end of simulation, b) PSO implementing. Coding of PSO in Matlab workspace is shown below: Figure 6. Coding of PSO in Matlab working area. 'particleswarm' - main function of Particle Swarm Optimization, built in 'no_var' - number of variables which we manipulate during simulation, in order to minimize final output, 'lb' - lower minimal value, if algorithm reaches it simulation would stop, 'ub' - upper maximum value, if algorithm reaches it simulation would stop, 'PSO_opt' - parameters of gen_algorithm, in this case its set to 50 generations with 50 population in each, k,best - would collect final result each time and output the best one at the end of simulation. IV. Simulation Results. a) Genetic algorithm results Best value for generic algorithm, which parameters were set as 50 generations with 50 population size in each, are shown below: - Kp = 0.9112, - Ki = 0, - Kd = 0.3868, - ITAE (best) = 3.6456 Review graph of 50 generations containing fitness function values is shown in Figure below: Figure 7. Genetic algorithm - 50 generating representation. The chart shows how an evolutionary algorithm performed over 50 generations. The algorithm quickly reaches a stable solution in the initial generations, with both the highest and average fitness values settling around zero. This means that the algorithm rapidly identifies a solution that is either optimal or very close to optimal, and continues to keep this solution stable with minimal changes in future iterations. Graphical representation of best result for Genetic Algorithm is shown below: 1 ГЗ I I l I i i i i 1.02 1 \ 0.98 0.96 0.94 \ / [ 1 2 4 f 3 i 3 10 12 14 16 18 20 |Ready Sample based T=20.000 Figure 8. Genetic algorithm result. The graph in Figure 8 displays a significant rise in value at the start (approximately at time 2), reaching a maximum just over 1.04. This shows that the Genetic Algorithm promptly identified a solution that greatly enhanced the fitness value. After reaching the highest point, the value decreases and fluctuates. The initial decrease goes below 0.98 at around time 5, and is then followed by a subsequent increase leading to a smaller peak just above 1 at around time 7. This back and forth movement indicates that the algorithm is continuously fine-tuning the solution in order to find the best possible balance. Stabilization occurs as time goes on, with the value eventually leveling off near 1. Starting at age 10, the value stays relatively constant with very small changes, suggesting that the genetic algorithm has reached a stable and final answer. The chart shows how the highest achievement in a Genetic Algorithm changes over a period of time. At first, there is a quick progress, but then there are fluctuations as the algorithm fine-tunes the solution. After some time, the value remains constant, indicating that the GA has reached a near-perfect solution through convergence. The first peak and following fluctuations demonstrate the exploratory aspect of the GA, while the final stabilization shows that it has successfully converged. b) Particle Swarm Optimization. Best value for generic algorithm, which parameters were set as 50 iterations with 50 swarms in each, are shown below: - Kp = 0.9068, - Ki = 0, - Kd = 0.3800, - ITAE (best) = 3.6446 Review graph of 50 generations containing fitness function values is shown in Figure below: X101S Best Function Value: 3.64457 12 r 10 -8 6 -4 -2 Figure 9. PSO 50 generations implementing. From the Figure 9 is clear, that at the start of iteration 0, the function value is very elevated. This suggests that the original solution or particle swarm possessed a greatly superior fitness value. Starting from the initial iteration, the function value significantly decreases to almost zero. This steep decrease indicates that the PSO promptly discovered much improved solutions, causing a significant drop in the function value. Graphical representation of best result for PSO is shown below in Figure 9: 1 0.8 0.6 0.4 0.2 T Ready Sample based T=20.000 Figure 10. The PSO application result. The Figure 10 shows how the PSO algorithm performed over a 20-unit time period. At first, the level of fitness increases quickly, showing a rapid enhancement in the solution. Afterward, there is a small back and forth movement as the algorithm refines the solution. After some time, the fitness value settles near 1, indicating that the PSO has reached an optimal or nearly optimal solution and remains stable with minimal changes. This usual conduct of PSO shows its efficiency in quickly identifying and maintaining a top-tier solution. The chart illustrates a steep rise in the fitness score, starting at zero and peaking just above 1 at approximately time 4. This shows that the PSO algorithm efficiently found a much-improved solution at the beginning of the process. Movement and balance: oscillation and maintaining stability. Following the initial peak, there is a small fluctuation as the fitness value decreases slightly below 1 at time 6 before leveling off. This conduct indicates that the PSO is improving the solution and fine-tuning the swarm's location in order to discover the best possible outcome. From approximately age 8 onwards, the fitness value stays fairly consistent with very minimal fluctuations, suggesting that the algorithm has reached a solution. Continued or constant state The fitness value hovers around 1 for most of the time period, from approximately time 8 to 20. This stable condition indicates that the PSO algorithm has discovered an optimal or very close to optimal solution and is keeping it with minimal variation. VI Conclusion. Currently, factory automation uses a PLC-based automatic control system and develops its capabilities as technology advances. To improve the productivity of small and medium-sized factories worldwide, it is crucial to study control systems and SCADA. In this study, we studied so called "nature based" AI algorithms, like Genetic algorithm and Particle Swarm Optimization. We described pros and cons of each method and were able to apply Genetic Algorithm to the DC-motor model. Then we did the same with Particle Swarm Optimization. We haven't tried to combine these two artificial intelligence methods for the particular reason of comparing it to each other. In conclusion, we can say that SCADA system control can enable the development of an automatic process up to the expertise and technology gained from general program application. And those operations are very sensitive when it comes to PID tuning. Artificial intelligence methods, particularly Genetic Algorithm and Particle Swarm Optimization have shown perspective results in minimizing both time and error values. Fuzzy Logic Controller in its case has shown lower results and need to mention that configuration of Fuzzy Logic itself is complicated and requires high experience. 1. Elizabeth Bautista, Melissa Romanus, Thomas Davis, Cary Whitney, and Theodore Kubaska. Collecting, monitoring, and analyzing facility and systems data at the national energy research scientific computing center // Association for Computing Machinery.- 2019.-Vol. 8.- P. 34-75; 2. Ji-Hyoung Chin, Chanwook Do, and Minjung Kim. How to increase sport facility users' intention to use ai fitness services: Based on the technology adoption model // International Journal of Environmental Research and Public Health.- 2022.-Vol.19.-P. 44 - 53; 3. Mariam Elnour, Yassine Himeur, Fodil Fadli, Hamdi Mohammedsherif, Nader Meskin, Ahmad M. Ahmad, loan Petri, Yacine Rezgui, and Andrei Hodorog. Neural network-based model predictive control system for optimizing building automation and management systems of sports facilities // Applied Energy.- 2022.- P. 31; 4. Sara Masoud, Bijoy Dripta Barua Chowdhury, Young Jun Son, Chieri Kubota, and Russell Tronstad. Simulation based optimization of resource allocation and facility layout for vegetable grafting operations // Computers and Electronics in Agriculture.-2019.- Vol.2.- P. 163; 5. Nico. Mastorakis. Recent researches in circuits, systems, communications and computers // European Conference of Computer Science.- 2022; 6. Teerawat Thepmanee, Sawai Pongswatd, Farzin Asadi, and Prapart Ukakimaparn. Implementation of control and scada system: Case study of allen bradley plc by using wirelesshart to temperature control and device diagnostic // Energy Reports.- 2022.- Vol. 8.- P. 934-941; 7. Anastasiia Shchurenko. Building an ecosystem and infrastructure for smart confectionery production.- 2024.- Vol. 3.- DOI: 10.32370/IAJ.3053; 8. Magdy Tawfik, Ahmed S. Shehata, Amr Ali Hassan, Mohamed A. Kotb. Introducing Optimal Energy Hub Approach in Smart Green Ports based on Machine Learning Methodology.- 2023.- Vol. 1.- DOI: 10.21203/ 9. Karolina Viduto. Smart technologies in the field of the facility management // Mokslas - Lietuvos ateitis journal.-2021
{"url":"https://cyberleninka.ru/article/n/distribution-of-facility-resources-based-on-combination-of-ai-methods-and-pid-controller","timestamp":"2024-11-03T14:13:23Z","content_type":"application/xhtml+xml","content_length":"88111","record_id":"<urn:uuid:392b5bb0-05c0-4e6a-9210-2fc9613f5c51>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00610.warc.gz"}
5 Best Ways to Handle Octal Digits as Strings in Python π ‘ Problem Formulation: Imagine you need to work with octal number representations in Python, where octal numbers are provided as strings. Your goal is to process these strings in various ways, such as converting them to integers or performing arithmetic operations. For instance, given the input string "345" (representing the octal number 345), you may want the integer output 229, which is its decimal equivalent. Method 1: Using int() with Base 8 This method employs the built-in int() function to convert a string containing an octal number into its decimal equivalent. The int() function takes two arguments: the string to convert and the base of the numeral system (in this case, 8 for octal). It’s straightforward, efficient, and the go-to method for octal string conversion. Here’s an example: octal_string = "345" decimal_number = int(octal_string, 8) This snippet takes the string "345", which represents an octal number, and converts it to its decimal equivalent using the int() function with 8 as the base. The output is 229, the decimal form of the octal number 345. Method 2: Using Octal Literals with the prefix “0o” In Python, octal literals can be represented by prefixing the number with '0o' or '0O'. This method is useful when you have a string and want to directly execute arithmetic without converting to an integer first. This approach is commonly used when the octal value is known at write-time and doesn’t need to be dynamically converted from a string. Here’s an example: octal_string = '0o345' decimal_number = eval(octal_string) The code uses the eval() function on a string that includes the octal literal (prefixed with '0o'). It evaluates the string as a Python expression and returns the decimal equivalent. Method 3: Formatting with f-Strings or format() Python 3.6 introduced f-strings, offering a convenient way to embed expressions inside string literals. You can use f-strings or the format() function to convert an octal string to its decimal equivalent by specifying the conversion type. This method is more useful for formatting purposes rather than just conversion. Here’s an example: octal_string = "345" decimal_number = f"{int(octal_string, 8)}" This code explicitly converts the octal string to an integer and then uses an f-string to embed that integer within the string. The output remains the same, 229, displayed as a string. Method 4: Using Octal String in Calculations This method involves converting the octal string to a decimal integer and then using it directly in arithmetic operations. It’s practical for scenarios where you need to manipulate octal numbers mathematically after their conversion. Here’s an example: octal_string = "345" decimal_number = int(octal_string, 8) sum_result = decimal_number + 5 In this snippet, after converting the octal string to a decimal integer, it adds 5 to the resulting integer. The final output 234 is the sum of the decimal version of the octal number 345 plus 5. Bonus One-Liner Method 5: List Comprehension and Join For a quick and dirty solution to convert an octal string to a decimal string without directly returning an integer, you could use a combination of list comprehension, the int() function, and str.join(). This is more of a Python trick and less practical for production code. Here’s an example: octal_string = "345" decimal_string = ''.join(str(int(char, 8)) for char in octal_string) This code snippet cleverly uses list comprehension to convert each character of the octal string into its decimal representation and then joins them together to form the final decimal string. It’s a creative, albeit convoluted, method of accomplishing the task. • Method 1: Using int() with base 8. Strengths: Simple, straightforward, efficient. Weaknesses: Only works for valid octal strings, doesn’t handle invalid formats. • Method 2: Using Octal Literals. Strengths: Easy for known literals. Weaknesses: Inconvenient for dynamic string handling, possible security risk with eval(). • Method 3: Formatting with f-Strings or format(). Strengths: Great for embedding in strings, versatile. Weaknesses: Overhead of string operations, may be less intuitive for non-string outputs. • Method 4: Using Octal String in Calculations. Strengths: Good for direct arithmetic operations post-conversion. Weaknesses: Requires explicit conversion step. • Method 5: List Comprehension and Join. Strengths: One-liner trick for specific use-cases. Weaknesses: Not practical for general use, can be unclear and inefficient.
{"url":"https://blog.finxter.com/5-best-ways-to-handle-octal-digits-as-strings-in-python/","timestamp":"2024-11-06T11:48:21Z","content_type":"text/html","content_length":"72155","record_id":"<urn:uuid:8e561539-5a85-4a6a-9935-72ce0ee620b3>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00227.warc.gz"}
Algebraic variety From Encyclopedia of Mathematics One of the principal objects of study in algebraic geometry. The modern definition of an algebraic variety as a reduced scheme of finite type over a field $k$ is the result of a long evolution. The classical definition of an algebraic variety was limited to affine and projective algebraic sets over the fields of real or complex numbers (cf. Affine algebraic set; Projective algebraic set). As a result of the studies initiated in the late 1920s by B.L. van der Waerden, E. Noether and others, the concept of an algebraic variety was subjected to significant algebraization, which made it possible to consider algebraic varieties over arbitrary fields. A. Weil [6] applied the idea of the construction of differentiable manifolds by glueing to algebraic varieties. An abstract algebraic variety is obtained in this way and is defined as a system $(V_\alpha)$ of affine algebraic sets over a field $k$, in each one of which open subsets $W_{\alpha\beta}\subset V_\alpha$, corresponding to the isomorphic open subsets $W_{\alpha\beta}\subset V_\beta$, are chosen. All basic concepts of classical algebraic geometry could be transferred to such varieties. Examples of abstract algebraic varieties, non-isomorphic to algebraic subsets of a projective space, were subsequently constructed by M. Nagata and H. Hironaka [2], [3]. They used complete algebraic varieties (cf. Complete algebraic variety) as the analogues of projective algebraic sets. J.-P. Serre [5] has noted that the unified definition of differentiable manifolds and analytic spaces as ringed topological spaces has its analogue in algebraic geometry as well. Accordingly, algebraic varieties were defined as ringed spaces (cf. Ringed space), locally isomorphic to an affine algebraic set over a field $k$ with the Zariski topology and with a sheaf of germs of regular functions on it. The supplementary structure of a ringed space on an algebraic variety makes it possible to simplify various constructions with abstract algebraic varieties, and study them using methods of homological algebra which involve sheaf theory. At the International Mathematical Congress in Edinburgh in 1958, A. Grothendieck outlined the possibilities of a further generalization of the concept of an algebraic variety by relating it to the theory of schemes. After the foundations of this theory had been established [4] a new meaning was imparted to algebraic varieties — viz. that of reduced schemes of finite type over a field $k$, such affine (or projective) schemes became known as affine (or projective) varieties (cf. Scheme; Reduced scheme). The inclusion of algebraic varieties in the broader framework of schemes also proved useful in a number of problems in algebraic geometry ([[Resolution of singularities|resolution of singularities]]; the moduli problem, etc.). Another generalization of the concept of an algebraic variety is related to the concept of an algebraic space. Any algebraic variety over the field of complex numbers has the structure of a complex analytic space, which makes it possible to use topological and transcendental methods in its study (cf. Kähler Many problems in number theory (the theory of congruences, Diophantine equations, modular forms, etc.) involve the study of algebraic varieties over finite fields and over algebraic number fields (cf. Algebraic varieties, arithmetic of; Diophantine geometry; Zeta-function in algebraic geometry). [1] M. Baldassarri, "Algebraic varieties" , Springer (1956) MR0082172 Zbl 0995.14003 Zbl 0075.15902 [2] I.R. Shafarevich, "Basic algebraic geometry" , Springer (1977) (Translated from Russian) MR0447223 Zbl 0362.14001 [3] I.V. Dolgachev, "Abstract algebraic geometry" J. Soviet Math. , 2 : 3 (1974) pp. 264–303 Itogi Nauk. i Tekhn. Algebra Topol. Geom. , 10 (1972) pp. 47–112 Zbl 1068.14059 [4] A. Grothendieck, J. Dieudonné, "Eléments de géométrie algébrique" Publ. Math. IHES , 4 (1960) MR0217083 MR0163908 Zbl 0118.36206 [5] J.-P. Serre, "Faisceaux algébriques cohérents" Ann. of Math. (2) , 61 : 2 (1955) pp. 197–278 Zbl 0067.16201 [6] A. Weil, "Foundations of algebraic geometry" , Amer. Math. Soc. (1946) MR0023093 Zbl 0063.08198 How to Cite This Entry: Algebraic variety. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Algebraic_variety&oldid=52799 This article was adapted from an original article by I.V. Dolgachev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
{"url":"https://encyclopediaofmath.org/wiki/Algebraic_variety","timestamp":"2024-11-03T03:16:44Z","content_type":"text/html","content_length":"20449","record_id":"<urn:uuid:af376951-f355-418d-9793-9bd7be336b70>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00490.warc.gz"}
KLL Sketch KLL Sketch Implementation of a very compact quantiles sketch with lazy compaction scheme and nearly optimal accuracy per retained item. See Optimal Quantile Approximation in Streams. This is a stochastic streaming sketch that enables near real-time analysis of the approximate distribution of items from a very large stream in a single pass, requiring only that the items are comparable. The analysis is obtained using get_quantile() function or the inverse functions get_rank(), get_pmf() (Probability Mass Function), and get_cdf() (Cumulative Distribution Function). As of May 2020, this implementation produces serialized sketches which are binary-compatible with the equivalent Java implementation only when template parameter T = float (32-bit single precision Given an input stream of N items, the natural rank of any specific item is defined as its index (1 to N) in inclusive mode or (0 to N-1) in exclusive mode in the hypothetical sorted stream of all N input items. The normalized rank (rank) of any specific item is defined as its natural rank divided by N. Thus, the normalized rank is between zero and one. In the documentation for this sketch natural rank is never used so any reference to just rank should be interpreted to mean normalized rank. This sketch is configured with a parameter k, which affects the size of the sketch and its estimation error. The estimation error is commonly called epsilon (or eps) and is a fraction between zero and one. Larger values of k result in smaller values of epsilon. Epsilon is always with respect to the rank and cannot be applied to the corresponding items. The relationship between the normalized rank and the corresponding items can be viewed as a two-dimensional monotonic plot with the normalized rank on one axis and the corresponding items on the other axis. If the y-axis is specified as the item-axis and the x-axis as the normalized rank, then y = get_quantile(x) is a monotonically increasing function. The function get_quantile(rank) translates ranks into corresponding quantiles. The functions get_rank(item), get_cdf(…) (Cumulative Distribution Function), and get_pmf(…) (Probability Mass Function) perform the opposite operation and translate items into ranks. The get_pmf(…) function has about 13 to 47% worse rank error (depending on k) than the other queries because the mass of each “bin” of the PMF has “double-sided” error from the upper and lower edges of the bin as a result of a subtraction, as the errors from the two edges can sometimes add. The default k of 200 yields a “single-sided” epsilon of about 1.33% and a “double-sided” (PMF) epsilon of about 1.65%. A get_quantile(rank) query has the following guarantees: - Let q = get_quantile(r) where r is the rank between zero and one. - The quantile q will be an item from the input stream. - Let true_rank be the true rank of q derived from the hypothetical sorted stream of all N items. - Let eps = get_normalized_rank_error(false). - Then r - eps ≤ true_rank ≤ r + eps with a confidence of 99%. Note that the error is on the rank, not the quantile. A get_rank(item) query has the following guarantees: - Let r = get_rank(i) where i is an item between the min and max items of the input stream. - Let true_rank be the true rank of i derived from the hypothetical sorted stream of all N items. - Let eps = get_normalized_rank_error(false). - Then r - eps ≤ true_rank ≤ r + eps with a confidence of 99%. A get_pmf() query has the following guarantees: - Let {r1, r2, …, r(m+1)} = get_pmf(s1, s2, …, sm) where s1, s2 are split points (items from the input domain) between the min and max items of the input stream. - Let mass_i = estimated mass between s_i and s_i+1. - Let true_mass be the true mass between the items of s_i, s_i+1 derived from the hypothetical sorted stream of all N items. - Let eps = get_normalized_rank_error(true). - then mass - eps ≤ true_mass ≤ mass + eps with a confidence of 99%. - r(m+1) includes the mass of all points larger than s_m. A get_cdf(…) query has the following guarantees; - Let {r1, r2, …, r(m+1)} = get_cdf(s1, s2, …, sm) where s1, s2, … are split points (items from the input domain) between the min and max items of the input stream. - Let mass_i = r_(i+1) - r_i. - Let true_mass be the true mass between the true ranks of s_i, s_i+1 derived from the hypothetical sorted stream of all N items. - Let eps = get_normalized_rank_error(true). - then mass - eps ≤ true_mass ≤ mass + eps with a confidence of 99%. - 1 - r(m+1) includes the mass of all points larger than s_m. From the above, it might seem like we could make some estimates to bound the item returned from a call to get_quantile(). The sketch, however, does not let us derive error bounds or confidences around items. Because errors are independent, we can approximately bracket a value as shown below, but there are no error estimates available. Additionally, the interval may be quite large for certain distributions. - Let q = get_quantile(r), the estimated quantile of rank r. - Let eps = get_normalized_rank_error(false). - Let q_lo = estimated quantile of rank (r - eps). - Let q_hi = estimated quantile of rank (r + eps). - Then q_lo ≤ q ≤ q_hi, with 99% confidence.
{"url":"https://apache.github.io/datasketches-python/main/quantiles/kll.html","timestamp":"2024-11-14T04:33:29Z","content_type":"text/html","content_length":"89075","record_id":"<urn:uuid:c0892fc1-4b0a-4428-a820-cc76e2163a32>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00343.warc.gz"}
Multiply and Divide Algebraic Fractions Resources for Multiply and Divide Algebraic Fractions • Questions Click Here • Video Tutorials Click Here Videos relating to Multiply and Divide Algebraic Fractions. Multiply and Divide Algebraic Fractions - Video - Algebraic Fractions - Multiplying & Dividing You must be logged in to access this resource Multiply and Divide Algebraic Fractions - Video - Algebraic Fractions | Multiply & Divide You must be logged in to access this resource Plans & Pricing With all subscriptions, you will receive the below benefits and unlock all answers and fully worked solutions. • All Content All courses, all topics Your own personal portal Exam Revision Revision by Topic • Content Any course, any topic Your own personal portal Exam Revision Revision by Topic
{"url":"https://classmathematics.com.au/resources/nsw/year-9/maths-advanced/algebra-expanding-and-factorising/multiply-and-divide-algebraic-fractions/","timestamp":"2024-11-05T22:00:50Z","content_type":"text/html","content_length":"59548","record_id":"<urn:uuid:a41c7dcf-fb18-43f8-9629-5670ccd91714>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00326.warc.gz"}
gee: Function to solve a Generalized Estimation Equation Model in gee: Generalized Estimation Equation Solver Produces an object of class "gee" which is a Generalized Estimation Equation fit of the data. gee(formula, id, data, subset, na.action, R = NULL, b = NULL, tol = 0.001, maxiter = 25, family = gaussian, corstr = "independence", Mv = 1, silent = TRUE, contrasts = NULL, scale.fix = FALSE, scale.value = 1, v4.4compat = FALSE) formula a formula expression as for other regression models, of the form response ~ predictors. See the documentation of lm and formula for details. a vector which identifies the clusters. The length of id should be the same as the number of observations. Data are assumed to be sorted so that observations on a cluster are contiguous id rows for all entities in the formula. data an optional data frame in which to interpret the variables occurring in the formula, along with the id and n variables. expression saying which subset of the rows of the data should be used in the fit. This can be a logical vector (which is replicated to have length equal to the number of observations), or subset a numeric vector indicating which observation numbers are to be included, or a character vector of the row names to be included. All observations are included by default. na.action a function to filter missing data. For gee only na.omit should be used here. R a square matrix of dimension maximum cluster size containing the user specified correlation. This is only appropriate if corstr = "fixed". b an initial estimate for the parameters. tol the tolerance used in the fitting algorithm. maxiter the maximum number of iterations. a family object: a list of functions and expressions for defining link and variance functions. Families supported in gee are gaussian, binomial, poisson, Gamma, and quasi; see the glm and family family documentation. Some links are not currently available: 1/mu^2 and sqrt have not been hard-coded in the ‘cgee’ engine at present. The inverse gaussian variance function is not available. All combinations of remaining functions can be obtained either by family selection or by the use of quasi. corstr a character string specifying the correlation structure. The following are permitted: "independence", "fixed", "stat_M_dep", "non_stat_M_dep", "exchangeable", "AR-M" and "unstructured" Mv When corstr is "stat_M_dep", "non_stat_M_dep", or "AR-M" then Mv must be specified. silent a logical variable controlling whether parameter estimates at each iteration are printed. a list giving contrasts for some or all of the factors appearing in the model formula. The elements of the list should have the same name as the variable and should be either a contrast contrasts matrix (specifically, any full-rank matrix with as many rows as there are levels in the factor), or else a function to compute such a matrix given the number of levels. scale.fix a logical variable; if true, the scale parameter is fixed at the value of scale.value. scale.value numeric variable giving the value to which the scale parameter should be fixed; used only if scale.fix == TRUE. v4.4compat logical variable requesting compatibility of correlation parameter estimates with previous versions; the current version revises to be more faithful to the Liang and Zeger (1986) proposals (compatible with the Groemping SAS macro, version 2.03) a formula expression as for other regression models, of the form response ~ predictors. See the documentation of lm and formula for details. a vector which identifies the clusters. The length of id should be the same as the number of observations. Data are assumed to be sorted so that observations on a cluster are contiguous rows for all entities in the formula. an optional data frame in which to interpret the variables occurring in the formula, along with the id and n variables. expression saying which subset of the rows of the data should be used in the fit. This can be a logical vector (which is replicated to have length equal to the number of observations), or a numeric vector indicating which observation numbers are to be included, or a character vector of the row names to be included. All observations are included by default. a function to filter missing data. For gee only na.omit should be used here. a square matrix of dimension maximum cluster size containing the user specified correlation. This is only appropriate if corstr = "fixed". a family object: a list of functions and expressions for defining link and variance functions. Families supported in gee are gaussian, binomial, poisson, Gamma, and quasi; see the glm and family documentation. Some links are not currently available: 1/mu^2 and sqrt have not been hard-coded in the ‘cgee’ engine at present. The inverse gaussian variance function is not available. All combinations of remaining functions can be obtained either by family selection or by the use of quasi. a character string specifying the correlation structure. The following are permitted: "independence", "fixed", "stat_M_dep", "non_stat_M_dep", "exchangeable", "AR-M" and "unstructured" When corstr is "stat_M_dep", "non_stat_M_dep", or "AR-M" then Mv must be specified. a logical variable controlling whether parameter estimates at each iteration are printed. a list giving contrasts for some or all of the factors appearing in the model formula. The elements of the list should have the same name as the variable and should be either a contrast matrix (specifically, any full-rank matrix with as many rows as there are levels in the factor), or else a function to compute such a matrix given the number of levels. a logical variable; if true, the scale parameter is fixed at the value of scale.value. numeric variable giving the value to which the scale parameter should be fixed; used only if scale.fix == TRUE. logical variable requesting compatibility of correlation parameter estimates with previous versions; the current version revises to be more faithful to the Liang and Zeger (1986) proposals (compatible with the Groemping SAS macro, version 2.03) Though input data need not be sorted by the variable named "id", the program will interpret physically contiguous records possessing the same value of id as members of the same cluster. Thus it is possible to use the following vector as an id vector to discriminate 4 clusters of size 4: c(0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1). Offsets must be specified in the model formula, as in glm. This is version 4.8 of this user documentation file, revised 98/01/27. The assistance of Dr B Ripley is gratefully acknowledged. Liang, K.Y. and Zeger, S.L. (1986) Longitudinal data analysis using generalized linear models. Biometrika, 73 13–22. Zeger, S.L. and Liang, K.Y. (1986) Longitudinal data analysis for discrete and continuous outcomes. Biometrics, 42 121–130. data(warpbreaks) ## marginal analysis of random effects model for wool summary(gee(breaks ~ tension, id=wool, data=warpbreaks, corstr="exchangeable")) ## test for serial correlation in blocks summary (gee(breaks ~ tension, id=wool, data=warpbreaks, corstr="AR-M", Mv=1)) if(require(MASS)) { data(OME) ## not fully appropriate link for these data. (fm <- gee(cbind(Correct, Trials-Correct) ~ Loud + Age + OME, id = ID, data = OME, family = binomial, corstr = "exchangeable")) summary(fm) } For more information on customizing the embed code, read Embedding Snippets.
{"url":"https://rdrr.io/cran/gee/man/gee.html","timestamp":"2024-11-09T13:13:58Z","content_type":"text/html","content_length":"26123","record_id":"<urn:uuid:4056a3d4-68e7-44c0-b232-2aed00507f0b>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00021.warc.gz"}
des_modes man page on OpenMandriva [printable version] DES_MODES(7) OpenSSL DES_MODES(7) des_modes - the variants of DES and other crypto algorithms of OpenSSL Several crypto algorithms for OpenSSL can be used in a number of modes. Those are used for using block ciphers in a way similar to stream ciphers, among other things. Electronic Codebook Mode (ECB) Normally, this is found as the function algorithm_ecb_encrypt(). · 64 bits are enciphered at a time. · The order of the blocks can be rearranged without detection. · The same plaintext block always produces the same ciphertext block (for the same key) making it vulnerable to a 'dictionary attack'. · An error will only affect one ciphertext block. Cipher Block Chaining Mode (CBC) Normally, this is found as the function algorithm_cbc_encrypt(). Be aware that des_cbc_encrypt() is not really DES CBC (it does not update the IV); use des_ncbc_encrypt() instead. · a multiple of 64 bits are enciphered at a time. · The CBC mode produces the same ciphertext whenever the same plaintext is encrypted using the same key and starting variable. · The chaining operation makes the ciphertext blocks dependent on the current and all preceding plaintext blocks and therefore blocks can not be rearranged. · The use of different starting variables prevents the same plaintext enciphering to the same ciphertext. · An error will affect the current and the following ciphertext blocks. Cipher Feedback Mode (CFB) Normally, this is found as the function algorithm_cfb_encrypt(). · a number of bits (j) <= 64 are enciphered at a time. · The CFB mode produces the same ciphertext whenever the same plaintext is encrypted using the same key and starting variable. · The chaining operation makes the ciphertext variables dependent on the current and all preceding variables and therefore j-bit variables are chained together and can not be rearranged. · The use of different starting variables prevents the same plaintext enciphering to the same ciphertext. · The strength of the CFB mode depends on the size of k (maximal if j == k). In my implementation this is always the case. · Selection of a small value for j will require more cycles through the encipherment algorithm per unit of plaintext and thus cause greater processing overheads. · Only multiples of j bits can be enciphered. · An error will affect the current and the following ciphertext Output Feedback Mode (OFB) Normally, this is found as the function algorithm_ofb_encrypt(). · a number of bits (j) <= 64 are enciphered at a time. · The OFB mode produces the same ciphertext whenever the same plaintext enciphered using the same key and starting variable. More over, in the OFB mode the same key stream is produced when the same key and start variable are used. Consequently, for security reasons a specific start variable should be used only once for a given key. · The absence of chaining makes the OFB more vulnerable to specific · The use of different start variables values prevents the same plaintext enciphering to the same ciphertext, by producing different key streams. · Selection of a small value for j will require more cycles through the encipherment algorithm per unit of plaintext and thus cause greater processing overheads. · Only multiples of j bits can be enciphered. · OFB mode of operation does not extend ciphertext errors in the resultant plaintext output. Every bit error in the ciphertext causes only one bit to be in error in the deciphered plaintext. · OFB mode is not self-synchronizing. If the two operation of encipherment and decipherment get out of synchronism, the system needs to be re-initialized. · Each re-initialization should use a value of the start variable different from the start variable values used before with the same key. The reason for this is that an identical bit stream would be produced each time from the same parameters. This would be susceptible to a 'known plaintext' attack. Triple ECB Mode Normally, this is found as the function algorithm_ecb3_encrypt(). · Encrypt with key1, decrypt with key2 and encrypt with key3 again. · As for ECB encryption but increases the key length to 168 bits. There are theoretic attacks that can be used that make the effective key length 112 bits, but this attack also requires 2^56 blocks of memory, not very likely, even for the NSA. · If both keys are the same it is equivalent to encrypting once with just one key. · If the first and last key are the same, the key length is 112 bits. There are attacks that could reduce the effective key strength to only slightly more than 56 bits, but these require a lot of memory. · If all 3 keys are the same, this is effectively the same as normal ecb mode. Triple CBC Mode Normally, this is found as the function algorithm_ede3_cbc_encrypt(). · Encrypt with key1, decrypt with key2 and then encrypt with key3. · As for CBC encryption but increases the key length to 168 bits with the same restrictions as for triple ecb mode. This text was been written in large parts by Eric Young in his original documentation for SSLeay, the predecessor of OpenSSL. In turn, he attributed it to: AS 2805.5.2 Australian Standard Electronic funds transfer - Requirements for interfaces, Part 5.2: Modes of operation for an n-bit block cipher algorithm Appendix A blowfish(3), des(3), idea(3), rc2(3) 1.0.1i 2014-07-22 DES_MODES(7) List of man pages available for OpenMandriva Copyright (c) for man pages and the logo by the respective OS vendor. For those who want to learn more, the polarhome community provides shell access and support. [legal] [privacy] [GNU] [policy] [cookies] [netiquette] [sponsors] [FAQ] Polarhome, production since 1999. Member of Polarhome portal. Based on Fawad Halim's script.
{"url":"http://ia64.polarhome.com/service/man/?qf=des_modes&tf=2&of=OpenMandriva&sf=7","timestamp":"2024-11-05T12:26:49Z","content_type":"text/html","content_length":"22407","record_id":"<urn:uuid:8357a2e5-613f-4c9f-8520-bb53961cf49c>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00393.warc.gz"}
Algebra Rule 21 Converting a root of a root into a single root ```\sqrt[m]{\sqrt[n]{a}} = \sqrt[nm]{a}``` Once again, by working backwards from the value of these two expressions we can see why they are equal. If ``\sqrt[m]{\sqrt[n]{a}} = x``, then we can construct ``a`` out of combinations of ``x`` and see how the whole equation works. To make things simple, we'll start with given values of ``m`` and ``n``. If ``\sqrt[2]{\sqrt[3]{a}} = x``, then ``x = \sqrt[2]{x*x}``, which also means that ``\sqrt [2]{\sqrt[3]{(x*x)*(x*x)*(x*x)}} = x``. So ``a = (x*x)*(x*x)*(x*x) = x^6 = x^{mn}``. And happily, ``\sqrt[mn]{x^{mn}} = x`` by definition, so we have ``\sqrt[m]{\sqrt[n]{a}} = x = \sqrt[mn]{x^{mn}} ``. Our example is a specific case where ``m = 2`` and ``n = 3``, but since ``a`` will always be equal to ``x^{mn}``, the equation holds regardless of the values of ``m`` and ``n`` ```\sqrt[2]{\sqrt[3]{729}} = \sqrt[2]{9} = 3 = \sqrt[6]{729}``` « Previous Rule Next Rule »
{"url":"http://algebrarules.com/rule-21-converting-a-root-of-a-root-into-a-single-root","timestamp":"2024-11-13T18:03:43Z","content_type":"text/html","content_length":"16043","record_id":"<urn:uuid:e52a9ed4-f492-4ebe-ab35-cec34429da7c>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00081.warc.gz"}
Schering Bridge: Definition, Circuit Diagram, Explanation, Advantages A Schering Bridge is a bridge circuit used for measuring an unknown electrical capacitance and its dissipation factor. The dissipation factor of a capacitor is the the ratio of its resistance to its capacitive reactance. The Schering Bridge is basically a four-arm alternating-current (AC) bridge circuit whose measurement depends on balancing the loads on its arms Figure 1 below shows a diagram of the Schering Bridge. In the Schering Bridge above, the resistance values of resistors R1 and R2 are known, while the resistance value of resistor R3 is unknown. The capacitance values of C1 and C2 are also known, while the capacitance of C3 is the value being measured. To measure R3 and C3, the values of C2 and R2 are fixed, while the values of R1 and C1 are adjusted until the current through the ammeter between points A and B becomes zero. This happens when the voltages at points A and B are equal, in which case the bridge is said to be 'balanced'. When the bridge is balanced, Z1/C2 = R2/Z3, where Z1 is the impedance of R1 in parallel with C1 and Z3 is the impedance of R3 in series with C3. In an AC circuit that has a capacitor, the capacitor contributes a capacitive reactance to the impedance. When the bridge is balanced, the negative and positive reactive components are equal and cancel out, so Similarly, when the bridge is balanced, the purely resistive components are equal, so C2/C3 = R2/R1 or C3 = R1C2 / R2. Note that the balancing of a Schering Bridge is independent of frequency. Balance equation is independent of frequency Used for measuring the insulating properties of electrical cables and equipment’s
{"url":"https://www.brainkart.com/article/Schering-Bridge--Definition,-Circuit-Diagram,-Explanation,-Advantages_12739/","timestamp":"2024-11-13T04:50:29Z","content_type":"text/html","content_length":"38239","record_id":"<urn:uuid:c06aa533-b58c-4c22-b9d9-869a33cdf20c>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00757.warc.gz"}
Recognizing Value of All the Notes Used in the UK Question Video: Recognizing Value of All the Notes Used in the UK Mathematics Count the notes. How many £5 notes are there? How many £10 notes are there? How many £20 notes are there? Video Transcript Count the notes. How many five-pound notes are there? How many 10-pound notes are there? And how many 20-pound notes are there? In the picture, we can see lots of money. But there aren’t any coins here. These are all notes, and each note shows a different number of pounds. Now this question tests how good we are at recognizing these notes. We need to count how many five-pound notes there are, how many 10-pound notes there are, and also how many 20-pound notes there are. So let’s start by thinking about the five-pound notes. How do we know a five-pound note if we see one? A five-pound note is a sort of bluey-green color, but really importantly, it has the number five written on it. This is how we know it’s worth five pounds. Let’s go along each row of notes and look for five-pound notes. We could put a counter on top of each one we find. In the first row, there are one, two five-pound notes. There don’t seem to be any in the second row or the third row. But if we look in the bottom row, we can see another one. So that’s one, two, three five-pound notes altogether. Now we need to think about 10-pound notes. We know that the design on a 10-pound note is a sort of browny-orange color. But the thing that really tells us it’s worth 10 pounds is where we can see the number 10 on there. Let’s go hunting for 10-pound notes. There aren’t any in the first row, but if we look in the second row, we can see one, two 10-pound notes. If we quickly look in the other two rows, we can see that there aren’t any more 10-pound notes to be found. There are two 10-pound notes. Finally, we need to count the number of 20-pound notes. A 20-pound note has a sort of dark bluey-purple design to it. And it has the number 20, a two followed by a zero. This note is worth 20 pounds. Let’s count them: one, two, three, four, and then there’s one in the bottom row that makes five altogether. In this question, we use what we knew about five-pound, 10-pound, and 20-pound notes to recognize them. There are three five-pound notes, two 10-pound notes, and five 20-pound notes.
{"url":"https://www.nagwa.com/en/videos/945180735823/","timestamp":"2024-11-12T09:46:05Z","content_type":"text/html","content_length":"243494","record_id":"<urn:uuid:79f310ea-b9b3-4a3f-b9cd-ea24a361d404>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00470.warc.gz"}
Testing higher-order properties with QuickCheck Posted on February 24, 2020 I have just released two libraries to enhance QuickCheck for testing higher-order properties: quickcheck-higherorder and test-fun. This is a summary of their purpose and main features. For more details, refer to the README and the implementations of the respective packages. This project started from experiments to design laws for the mtl library. What makes a good law? I still don’t know the answer, but there is at least one sure sign of a bad law: find a counterexample! That’s precisely what property-based testing is useful for. As a byproduct, if you can’t find a counterexample after looking for it, that is some empirical evidence that the property is valid, especially if you expect counterexamples to be easy to find. Ideally we would write down a property, and get some feedback from running it. Of course, complex applications will require extra effort for worthwhile results. But I believe that, once we have our property, the cost of entry to just start running test cases can be reduced to zero, and that many applications may benefit from it. QuickCheck already offers a smooth user experience for testing simple “first-order properties”. quickcheck-higherorder extends that experience to higher-order properties. A higher-order property is a property quantifying over functions. For example: Vanilla QuickCheck is sufficient to test such properties, provided you know where to find the necessary utilities. Indeed, simply passing the above property to the quickCheck runner results in a type quickCheck tries to convert prop_bool to a Property, but that requires Bool -> Bool to be an instance of Show, which is of course absurd.^1 Instead, functions must be wrapped in the Fun type: prop_bool' :: Fun Bool Bool -> Bool -> Property prop_bool' (Fn f) x = f (f (f x)) === f x main :: IO () main = quickCheck prop_bool' -- OK! Compounded over many properties, this Fun/Fn boilerplate is repetitive. It becomes especially cumbersome when the functions are contained inside other data types. quickcheck-higherorder moves that cruft out of sight. The quickCheck' runner replaces the original quickCheck, and infers that (->) should be replaced with Fun. -- The first version prop_bool :: (Bool -> Bool) -> Bool -> Property prop_bool f x = f (f (f x)) === f x main :: IO () main = quickCheck' prop_bool -- OK! Data and its representation The general idea behind this is to distinguish the data that your application manipulates, from its representation that QuickCheck manipulates. The data can take any form, whatever is most convenient for the application, but its representation must be concrete enough so QuickCheck can randomly generate it, shrink it, and print it in the case of failure. Vanilla QuickCheck handles the simplest case, where the data is identical to its representation, and gives up as soon as the representation has a different type, requiring us to manually modify the property to make the representation of its input data explicit. This is certainly not a problem that can generally be automated away, but the UX here still has room for improvement. quickcheck-higherorder provides a new way to associate data to its representation, via a type class Constructible, which quickCheck' uses implicitly. class (Arbitrary (Repr a), Show (Repr a)) => Constructible a where type Repr a :: Type fromRepr :: Repr a -> a Notably, we no longer require a itself to be an instance of Arbitrary and Show. Instead, we put those constraints on an associated type Repr a, which is thus inferred implicitly whenever values of type a are quantified over. Testable equality Aiming to make properties higher-level, more declarative, the prop_bool property above can also be written like this: Where (:=:) is a simple constructor. That defers the choice of how to interpret the equation to the caller of prop_bool, leaving the above specification free of such operational details. Behind the scenes, this exercises a new type class for testable equality,^2 TestEq, turning equality into a first-class concept even for higher-order data (the main examples being functions and infinite lists). For more details, see the README of quickcheck-higherorder. Testable higher-order functions QuickCheck offers a Fun type to express properties of arbitrary functions.^3 However, Fun is limited to first-order functions. An example of type that cannot be represented is Cont. The library test-fun implements a generalization of Fun which can represent higher-order functions. Any order! It’s a very simple idea at its core, but it took quite a few iterations to get the design right. The end result is a lot of fun. The implementation exhibits the following characteristics, which are not obvious a priori: • like in QuickCheck’s version, the type of those testable functions is a single GADT, i.e., a closed type, whereas an open design might seem more natural to account for user-defined types of • the core functions to apply, shrink, and print testable functions impose no constraints on their domains; • test-fun doesn’t explicitly make use of randomness, in fact, it doesn’t even depend on QuickCheck! The library is parameterized by a functor gen, and almost all of the code only depends on it being an Applicative functor. There is (basically) just one function (cogenFun) with a Monad constraint and with a random generator as an argument. As a consequence, test-fun can be reused entirely to work with Hedgehog. However, unlike with QuickCheck, some significant plumbing is required, which is work in progress. test-fun cannot just be specialized to Hedgehog’s Gen monad; it will only work with QuickCheck’s Gen,^4 so we currently have to break into Hedgehog’s internals to build a compatible version of the “right” Gen. test-fun implements core functionality for the internals of libraries like quickcheck-higherorder. Users are thus expected to only depend directly on quickcheck-higherorder (or the WIP hedgehog-higherorder linked above). Generators as traversals test-fun only requires an Applicative constraint in most cases, because intuitively a testable function has a fixed “shape”: we represent a function by a big table mapping every input to an output. To generate a random function, we can generate one output independently for each input, collect them together using (<*>), and build a table purely using (<$>). However this view of “functions as tables” does not extend to higher-order functions, which may only make finite observations of their infinite inputs. A more general approach is to represent functions as decision trees over their inputs. “Function as tables” is the special case where those trees are maximal, such that there is a one-to-one correspondence between leaves and inputs. However, maximal trees don’t always exist. Then a random generator must preemptively terminate trees, and that requires stronger constraints such as Monad (intermediate ones like Alternative or Selective might be worth considering too). For more details, see the README of test-fun. These libraries are already used extensively in my project checkers-mtl, which is where most of the code originated from. One future direction on my mind is to port this to Coq, as part of the QuickChick project. I’m curious about the challenges involved in making the implementation provably total, and in formalizing the correctness of testing higher-order properties. I’m always looking for opportunities to make testing as easy as possible. I’d love to hear use cases for these libraries you can come up with!
{"url":"https://blog.poisson.chat/posts/2020-02-24-quickcheck-higherorder.html","timestamp":"2024-11-10T12:10:04Z","content_type":"application/xhtml+xml","content_length":"18858","record_id":"<urn:uuid:5cc22cc5-5551-4158-810e-6ee888800b20>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00893.warc.gz"}
An Etymological Dictionary of Astronomy and Astrophysics چوناییده، چونامند cunâyide, cunâmand Fr.: qualifié Having the qualities, accomplishments, etc., that fit a person for some function, office, or the like (Dictionary.com). Past participle of → qualify. ۱) چوناییده بودن، چونامند بودن؛ ۲) چوناییدن، چونامند کردن 1) cunâyide budan, cunâmand budan; 2) cunâyidan, cunâmand kardan Fr.: 1) se qualifier; 2) qualifier 1) Be entitled to a particular benefit or privilege by fulfilling a necessary condition; become officially recognized as a practitioner of a particular profession or activity, typically by undertaking a course and passing examinations. 2) Officially recognize or establish (someone) as a practitioner of a particular profession or activity (OxfordDictionaries.com). → quality; → -fy. Fr.: qualitatif Pertaining to or concerned with quality or qualities. From L.L. qualitativus, from qualitat- + -ivus, → -ive. Cunik, from cuni, → quality, + -ik, → -ic. چونا، چونی cunâ (#), cuni (#) Fr.: qualité A distinguishing characteristic, property, or attribute of something. 2) → image quality. 3) → sound quality. M.E. qualite, from O.Fr. qualite (Fr. qualité), from L. qualitas, from qual(is) "of what sort?" + → -ity. Cunâ, cuni, from Mid.Pers. cigôn "how?," cigônêh "nature, character," O.Pers/Av. ci- "what, any," collateral stem to ka- "who?, what?" (cf. Skt. ka-; Gk. po-; L. quo-; E. what, who; PIE *qwos/*qwes) + Av. gaona- "color" (Mid.Pers. gônak "kind, species"). kuântomhâ (#) Fr.: quanta Plural of → quantum. L. plural of quantum. Kuântomhâ, from kuântom, → quantum + -hâ plural suffix. Fr.: quantification The fact or process of quantifying. Verbal noun of → quantify. Fr.: quantificateur 1) 1) A word that indicates the quantity of something. 2) Math.: A phrase in a logical expression that somehow specifies the quantity of variables. In particular either of the phrases "for all" (written symbolically as ∀) and "there exists" (∃). 3) In → predicate logic, a symbol that applies to, or binds, → variables which represent the → arguments of → predicates. See also → existential quantifier and → universal quantifier. In → first-order logic theses variables must range over → individuals. In higher-order logics they may range over predicates. Agent noun of → quantify Fr.: quantifier 1) To express as a number or amount. 2) In predicate logic: To express by a → symbol how many of the → individuals have the property in common. M.L. quantificare, from to L. quant(us) "how much?" + -ificare "-ify." Candâyidan infinitive of candâ, → quantity + -idan. Fr.: quantitatif Relating to, measuring, or measured by the quantity of something rather than its → quality (OxfordDictionaries.com). From L.L. quantitativus, from quanitat- + -ivus "-ive." quantitative analysis آنالس ِچندایی ânâlas-e candâyi Fr.: analyse quantitative The analysis of a chemical sample to derive its precise percentage composition in terms of elements, radicals, or compounds. → quantitative; → analysis. چندا، چندی candâ (#), candi (#) Fr.: quantité The property of magnitude. An entity having magnitude, size, extent, or amount. M.E., from rom O.Fr. quantite (Fr. quantité), from L. quantitatem (nominative quantitas), from quant(us) "how much?" + -itas, → -ity. Candâ, candi "quantity," Mid.Pers. candih "amount, quantity," from cand "how many, how much; so many, much;" O.Pers. yāvā "as long as;" Av. yauuant- [adj.] "how great?, how much?, how many?," yauuat [adv.] "as much as, as far as;" cf. Skt. yāvant- "how big, how much;" Gk. heos "as long as, until." kuântomeš (#) Fr.: quantification 1) The procedure of restricting a continuous quantity to certain discrete values. 2) Physics: The procedure of deriving the quantum-mechanical laws of a system from its corresponding classical laws. Verbal noun of → quantize. kuântomidan (#) Fr.: quantifier Math.: To restrict a variable quantity to discrete values rather than to a continuous set of values. Physics: To change the description of a physical system from classical to quantum-mechanical, usually resulting in discrete values for observable quantities, as energy or angular momentum. From quant(um) + → -ize. From kuântom, → quantum, + -idan infinitive suffix. kuântomidé (#) Fr.: quantifié 1) Capable of existing in only one of several states. 2) Of or pertaining to discrete values for → observable quantities. P.p. of → quantize. kuântomandé (#) Fr.: quantificateur A device with a limited number of possible output values hat can translate an incoming signal into these values or codes for outputting. Agent noun of → quantize. kuântom (#) Fr.: quantum The smallest amount of energy that can be absorbed or radiated by matter at a specified frequency (plural quanta). It is a → discrete quantity of energy hν associated with a wave of frequency ν, where h represents the → Planck's constant. Quantum "a particular amount," from L. quantum "how much," neuter singular of quantus "how great." Introduced in physics by Max Planck (1858-1947) in 1900. quantum censorship سانسور ِکوآنتومی sânsur-e kuântomi Fr.: censure quantique A concept whereby properties of objects vary according to the energy with which they are probed. An atomic system in its → ground state tends to remain as it is if little energy is fed in, betraying no evidence of its internal structure. Only when it is excited into a higher state do complexities emerge. This is the essence of quantum censorship. Thus, below an energy threshold, atoms appear to be impenetrable. Above it, their components can be exposed (F. Wilczek, 2013, Nature 498, 31). → quantum; censorship, from censor, from M.Fr. censor and directly from L. censor "a Romain magistrate who kept the register or census of the citizens, and supervised morals," from censere "to appraise, value, judge," from PIE root *kens- "to speak solemnly, announce;" cf. Av. səngh- (sanh-) "to declare, explain;" Pers. soxan "word, speech;" Skt. śams- "to praise, recite." quantum chromodynamics رنگ-توانیک ِکوآنتومی rangtavânik-e kuântomi Fr.: chromodynamique quantique The → quantum field theory that deals with the → strong interaction and the structure of elementary particles in the framework of → quantum theory. The cohesive attraction between the → quarks, that constitute → hadrons, involves the participation of three particles. Each of these particles is assigned a different → color "charge." The existence of these "charges" requires a multiplicity of different messenger particles to communicate the interaction and glue the quarks together. These messengers are called → gluons and there are eight different types. → quantum; → chromodynamics quantum coherence همدوسی ِکوآنتومی hamdusi-ye kuantomi Fr.: cohérence quantique In quantum physics, a situation where an object's wave property is split in two, and the two waves coherently interfere with each other in such a way as to form a single state that is a superposition of the two states. This phenomenon is based on the fact that atomic particles have wave-like properties. Quantum coherence is in many ways similar to → quantum entanglement, which involves the shared states of two quantum particles instead of two quantum waves of a single particle. Quantum coherence and quantum entanglement are both rooted in the → superposition principle. → quantum; → coherence. quantum computer رایانگر ِکوآنتومی râyângar-e kuântomi Fr.: ordinateur quantique A type of computer, as yet hypothetical, that uses quantum mechanical laws, such as the → superposition principle and the → quantum entanglement, to perform calculations based on the behavior of particles at the → subatomic level. A quantum computer would gain enormous processing power through the ability to be in multiple states, and to perform tasks using all possible permutations → quantum; → computer.
{"url":"https://dictionary.obspm.fr/index.php?showAll=1&&search=Q&&formSearchTextfield=&&page=1","timestamp":"2024-11-14T03:56:31Z","content_type":"text/html","content_length":"31377","record_id":"<urn:uuid:643fc2a1-9ad6-4602-9cb7-01cc9be23feb>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00206.warc.gz"}
Once of the important things to do when you play around with electronics is to make sure that you are safe-guarding your electronic components from over current. The most common way to do that is by adding a resistor in series. You can calculate the value of the resistor needed for safe operation by using Ohm’s law. But, when I was getting started with electronics and Arduino, I always found it difficult to understand how a particular value of resistor is recommended in tutorials, even though you will be getting a different value when you try to apply Ohm’s law yourself. It took me quite sometime to understand the logic behind it and I thought of explaining it here, so that it is helpful for people who are also just getting started. Protecting Arduino Pin from over current Let’s consider the simple Blink example in Arduino. This is most probably the first program you might have tried when you are getting started with Arduino. If you look at the circuit, you will find that you are asked to connect a 220 Ohm resistor in series to protect the Arduino pin and the LED. But it is not mentioned how or why this value is chosen. Now let’s try to calculate the value ourself. You need to be familiar with Ohm’s law, if you are not, then read this excellent tutorial by Evil Mad Scientist, which explains the whole concept very If you have read the above tutorial, you will now know that a typical red LED has a voltage drop of 1.8V and a current of about 25mA. The Arduino Pin has an output voltage of 5V. Let’s use these values in our calculation. V = (power source) – (voltage drop) = 5V – 1.8V = 3.2 V I = 25 mA We need to find R. R = V/I Substituting the values, you will get R = 3.2/0.025 = 128 Ohms. We need to use 128 Ohm resistor, but the tutorial asks us to use 220 Ohm, which is almost double. Practical easiness over theoretical correctness It took me quite sometime to figure out why 220 Ohm is recommended over 128 Ohm. It is an apt example of choosing practical easiness over theoretical correctness. Engineers being practical people always prefer practical solutions over theoretical ones 😉 If you try to buy resistors from a local hobby shop, you will find that the resistors are available in the following values. { 100, 220, 470, 1000, 2200, 4700, 10000 } These are the standard values and are easier to find rather than other values. If you look at the values, you will find that 100 Ohm is less than 128 Ohm (that we calculated) and is quite risky. The next higher easily available value is 220 Ohm. When you substitute R=220 in the equation I=V/R I = V/R I = 3.2/220 ~= 14mA You will get the value of the current to be around 14mA. LED’s operate between 10-25mA. Also, since LED’s are non-linear devices, the difference in the current from 14mA to 25mA doesn’t necessary mean a proportional difference in the brightness. In most cases, you may not even be able to tell the difference. So, choosing 220 Ohm instead of 128 Ohm is purely because of practical easiness. If you have bought a getting started kit with Arduino or a pack of assorted set of resistors, you are more likely to find a 220 Ohm rather than 128 Ohm. As, I mentioned before, the Blink example is most probably the first circuit that people are going to try and if you ask them to use a non-common resistor, then most probably they are not going to find it and might stop right there itself, instead of going forward. And that’s the reason why they have recommended 220 Ohm. Arduino, being a platform for beginners, it is perfectly fine that they tried to simplify things for you. But once you start to grasp things, you might have to dig deeper to understand why a particular circuit or sketch is build in a certain way. Happy hacking 🙂
{"url":"https://hardwarefun.com/2013/02","timestamp":"2024-11-10T15:56:54Z","content_type":"text/html","content_length":"24028","record_id":"<urn:uuid:0c0e3382-eb7b-4270-b1d0-629d8519e05c>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00704.warc.gz"}
Break-Even Point | Example & Definition | InvestingAnswers Break-Even Point Definition In accounting, economics, and business, the break-even point is the point at which cost equals revenue (indicating that there is neither profit nor loss). At this point in time, all expenses have been accounted for, so the product, investment, or business begins to generate profit. The concept of “breaking even” has multiple applications, so the definition of break-even point varies, depending on the situation it is used. Break-Even Point for Investments The break-even point of an investment occurs when the market price of the investment equals its original cost. At this point, the investor has neither gained nor lost money. If the price of the investment goes above the market price, the investment becomes profitable. However, if the price of the investment falls below the market price, the investment is not profitable. Break-Even Point for Businesses The break-even point for businesses occurs when revenue and expenses are equal. Alternatively, it is the amount of revenue that a business needs to earn in order to cover both its fixed and variable Before reaching its break-even point, the business will operate at a loss since its revenue doesn’t cover all of its costs. After reaching its break-even point, all costs have been accounted for, so it is able to start generating profit. Why Is Break-Even Point Important? The break-even point is an essential metric that can help determine whether an investment, product, or business is financially viable. It highlights the bare minimum performance required to become profitable, helping the investor or company make important decisions. Common applications include: • Identifying the point at which the business will begin to generate profit • Developing cost structures • Identifying opportunities for promotions and discounts • Establishing production goals • Determining the optimal sales mix • Revealing how much revenue must be earned to cover all expenses • Determining optimal price points • Calculating the profitability of a product or investment • Determining how fluctuations in price or volume of sales will impact profit • Identifying sales volume needed to hit target profit • Establishing how far sales can decline before losses are incurred How to Calculate Break-Even Point The break-even point occurs when a company’s revenue is equal to its expenses, so the first step is to identify costs and selling price. Fixed costs: costs that are independent of the number of units produced (e.g. rent, interest) Variable costs: costs that are dependent on the number of units produced (e.g. raw materials, hourly wages) Selling price: the price the product is sold for Using this data, the break-even point is calculated by dividing fixed costs by the contribution margin (selling price - the variable cost per unit). The resulting number represents the number of units the company needs to sell in order for it to break even. Any units below this number will be sold at a loss, and any units above this number will generate profit. Until the break-even point is reached, the company’s expenses will be greater than its revenue (so it will be operating at a loss). What Causes the Break-Even Point to Increase? A higher break-even point means that a company must generate more revenue in order to cover its costs. There are a number of reasons why a break-even point might increase: • Increase in fixed cost • Increase in variable cost • Decrease in selling price • A change in sales mix (proportion of each product sold to total sales) Break-Even Point Formula You can use the following formula to calculate the break-even point: Break-Even Point Example Bob is considering opening a bakery that will sell a single type of bread. He is working on a business model and wants to discover whether this venture is financially viable – and when it would become profitable. Here is a breakdown of his financials: Fixed costs (monthly) Variable costs (per loaf) Selling price (per loaf) Rent $2,500 Flour $0.50 Insurance $250 Water $0.25 Utilities $250 Salt $0.10 Advertising $500 Yeast $0.15 Total $3,500 Total $1 Total $5 Using the break-even point formula above, he can calculate how many loaves the bakery will need to sell each month in order to cover all expenses: In order to break even, Bob’s bakery would need to sell 875 loaves of bread per month. If it sells fewer than 875 loaves, the business’ revenue would not cover its expenses, so Bob would lose money. However, if the bakery sells more than 875 loaves per month, it would earn enough revenue to cover all costs and generate profit. Note: Many businesses are not profitable from the beginning because it takes time to attract customers, reduce costs, etc. Because of this, many businesses struggle to reach the break-even point before they run out of capital. In Bob’s case, he could try to negotiate his fixed costs or find new suppliers for his variable costs, but there are no guarantees. Every situation is different and there are a number of ways to survive before a business becomes profitable.
{"url":"https://investinganswers.com/dictionary/b/break-even-point","timestamp":"2024-11-05T04:19:04Z","content_type":"text/html","content_length":"78434","record_id":"<urn:uuid:5d7472ec-9996-4e81-8382-4e15a59d9ad6>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00797.warc.gz"}
Our users: Congratulations & Thanks for this wonderful piece of software. It is both challenging and fun. M.V., Texas Math has never been easy for me to grasp but this program makes it easy to understand. Thanks! John Tusack, MI Be it Step by Step explanation for an equation or graphical representation, you get it all. I just love to use this due to the flexibility it provides while studying. Alisha Matthews, NC I love it. It is much easier to move around and the colors are easier on the eyes. C.B., Oklahoma Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among Search phrases used on 2013-10-16: • singapore additional maths online tutorial • ti 83 math lesson • Quadric Puzzle • calculate balancing equations • free online algebra for dummies • math probloms • ALEKS cheats • online trinomial calculator • quadratic expression solver • prentice hall algebra 1 operations with scientific notation worksheets • slope worksheets • solving expression calculator • simplify functions calculator • solving equations with like terms • how to solve a scatter plot problem • texas insteruments ti-83 plus factorial button • two-step equations • ti83 partial fraction decomposition • convert decimal to fraction using excel • college that ofter the program • slope intercept on a graphing calculator • free divion worksheets • ratio worksheets beginner • TAKS practice 6th grade math • sample advanced algebra test items • Texas math Glencoe Mac 2 • TI-83 how to find slope • Equations Worksheet • online decimal to mixed fraction converter • use algorithm to solve quatratic equations equations • ti-89 simulator • 6th graders' math dictionary • 9th grade math review online • previous accounting question papers • algebra 2 cheat calculator • calculate ellipses • writing linear equation worksheet • Simplifying radicals calculator • maple "time derivative" freefall • modern algebra tutorial • matlab solve for variable in equation • highest factor ever maths • "5A elements" animations • fraction word problems and answers worksheets • square roots and exponents • third order homogeneous ODE • answers for houghton mifflin algebra work • ti 89 online graphing calculator • grade seven math worksheets+canada • powerpoint for dummies on solving algebraic addition and subtraction equations • converting to square root on calculator • Multiplying & Dividing Fractions TEST • easy 7th grade math dimensional analysis help • adding mixed numbers worksheet • fifth grade variable expression • use computer based algorithm to solve quadratic equations • ti 83 exponents complex • Download free algebra helper • distributive property elementary worksheets • algebra 2: Simplifiying Rational Expressions • quadratic equation roots • Finding the roots of a third order quadratic • Free Printable Math Sheets • mcdougal littell middle school answers • java determine if input is palindrome integer • prealgabra • lcd worksheet • utility formula quadratic • free ti-89 calculator games • prentice hall prealgebra answers • easy way to find area algebra • free online graphing calculator ti 83 to use • math trivia • gauss-jordan worksheets • solve my algebra equations • cubed root of 5/4 • Henderson Hasselback equation calculation free base • how to solve three mathematical equations using cross multiplication rule • online quadrilaterals test paper(maths) • how to solve scale factor problems • "graphing tools" "Taylor polynomial" • C language algebraic solver • what is the difference between the highest common factor and the lowest commen multiple? • ebook math algebra free • best software for algebra 2 • 5th grade pratice exam • parabola calculator • Printable Ged Practice Tests with Answers • Mixed number to decimal converter • algebra tiles factoring worksheet • fundamentals of cost accounting free • how to find fourth root • fractional equations with complex fractions • How to program Ti-83 plus Quadratic Formula • permutation word problems worksheet • math worksheet like terms • boolean algebra solver • curved line equations
{"url":"https://mathworkorange.com/math-help-calculator/trigonometry/9th-grade-algebra-help-free.html","timestamp":"2024-11-03T04:24:18Z","content_type":"text/html","content_length":"87510","record_id":"<urn:uuid:cc05857a-9f21-4889-9ffc-0aa99cbe89cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00675.warc.gz"}
Four color theorem and convex relaxation for image segmentation with any number of regions Image segmentation is an essential problem in imaging science. One of the most successful segmentation models is the piecewise constant Mumford-Shah minimization model. This minimization problem is however difficult to carry out, mainly due to the non-convexity of the energy. Recent advances based on convex relaxation methods are capable of estimating almost perfectly the geometry of the regions to be segmented when the mean intensity and the number of segmented regions are known a priori. The next important challenge is to provide a tight approximation of the optimal geometry, mean intensity and the number of regions simultaneously while keeping the computational time and memory usage reasonable. In this work, we propose a new algorithm that combines convex relaxation methods with the four color theorem to deal with the unsupervised segmentation problem. More precisely, the proposed algorithm can segment any a priori unknown number of regions with only four intensity functions and four indicator ("labeling") functions. The number of regions in our segmentation model is decided by one parameter that controls the regularization strength of the geometry, i.e., the total length of the boundary of all the regions. The segmented image function can take as many constant values as needed. • Convex relaxation method • Four color theorem • Mumford-Shah model • Unsupervised segmentation ASJC Scopus subject areas • Analysis • Modeling and Simulation • Discrete Mathematics and Combinatorics • Control and Optimization Dive into the research topics of 'Four color theorem and convex relaxation for image segmentation with any number of regions'. Together they form a unique fingerprint.
{"url":"https://faculty.kaust.edu.sa/en/publications/four-color-theorem-and-convex-relaxation-for-image-segmentation-w","timestamp":"2024-11-11T03:43:12Z","content_type":"text/html","content_length":"55731","record_id":"<urn:uuid:4c74aac2-06be-4cdc-8e28-ba096dac755b>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00465.warc.gz"}
Thoughts on the M4 Conference I had the opportunity to attend the M4 Conference held last week in NYC, which focused on the results of the recent M4 forecasting competition, as well as more generally on the state of the art in time series forecasting. In this post, I plan to summarize some of the key ideas that were presented at the conference and point out some of the thoughts that have occurred to me since. There were a number of excellent speakers whose key points (from my perspective) I summarize very briefly later on in this blog, with the standout ones for me being: • Slawek Smyl (winner of the competition with his “hybrid” method) • Spyros Makridakis (M competitions) • Nassim Taleb • Pablo Montero-Manso (representing the runner up team in the competition with a boosting meta-learning method) • Andrea Pasqua The rest of this post will discuss: • Big Ideas of the M4 Conference • Summaries of some of the talks In a follow up post I hope to discuss what actuaries can learn from the M4 competition. The Big Ideas of the M4 Conference There were several recurring themes at the conference that were addressed several times by the speakers. Of these, the one that came up the most often was the difference between statistics and machine learning. Stats vs ML It was fascinating to see the back and forth between the speakers and the audience on exactly what defines machine learning, and how this is different from statistics. Two of the different viewpoints • Statistical methods generally do not learn across different time series and datasets, whereas ML methods do. (This first perspective made sense from the perspective that most methods used for time series forecasting focus on the univariate case, i.e. where there is only one sequence, and techniques to leverage information across series are newer in this field (although obviously not a new concept in more traditional applications of statistics.) • There is no difference between statistics and ML, and in fact neural networks are a generalizations of GLMs, which are a basic statistical tool, in other words, the distinction is arbitrary. Interestingly, there was also not much consensus on whether the field of forecasting should be classified as a traditional statistical discipline or not. One good point that was made is that one of the basic time series methods – exponential smoothing – was always used as an algorithm, until statistical justification in the state-space framework was given by Rob Hyndman et al. One amusing debate focussed on whether Slawek’s method was in fact a statistical or machine learning approach, with different participants arguing for their perspectives, and being somewhat averse to the idea of a hybrid approach. This carried on, until Slawek himself was asked to clarify, at which point he confirmed that his method is a “hybrid” of statistical and machine learning approaches. My perspective is that some of these issues can be tied up quite neatly using the distinction between prediction and inference given by Shmueli (2010). A significant part of statistical practice is focussed on defining models and then working out whether or not the observed data could have been generated by the model, and, within this framework, one generally does not have concepts such as out-of-sample predictive accuracy. Machine learning, on the other hand, focuses on achieving good out-of-sample performance of models, whether these have been specified using some stochastic data generating procedure, or on an algorithmic basis. From this perspective, the field of forecasting is not a traditional statistical discipline, as the focus is on prediction! A recurring theme of the M competitions is that more complex models are usually outperformed by simple methods, for example, in the original M1 competition it was shown that exponential smoothing was better than ARIMA models. In the M4 competition, this became much more nuanced. One the one hand, “vanilla” machine learning techniques performed poorly, and worse than the benchmark, mirroring the findings in Makridakis, Spiliotis and Assimakopoulos (2018). On the other hand, the winners of the M4 competition used relatively more complex machine learning methods to great success. The difference seems to be that the complexity of the methods is in how they learn to generalize across time series (Slawek’s LSTM model and Paulo’s meta-learning algorithm), instead of trying to apply especially sophisticated methods to single time series. Triumph of Deep Learning As I have written about several times on this blog, the big advantage of deep learning over traditional machine learning approaches is that feature engineering gets performed automatically (i.e. this is the paradigm of representation learning, in that the model learns the features), and therefore, when dealing with large and very complex datasets, suitable neural network architectures can provide a massive performance boost over other approaches. I think this was clearly part of the “secret sauce” of Slawek’s winning solution, in that he very neatly specified a neural network combined with exponential smoothing, thus obviating the need to try derive features from each time series. This is in contrast to the runner-up solution presented by Pablo, which involved a substantial feature engineering step, in which many features were calculated for each time-series, after which a boosted tree model was fit on these features to work out how to weight the various time series methods. More to learn Although forecasting is not a new field, it seemed to me that many participants at the conference felt that there is much more to learn to advance the state of the art of forecasting, especially as machine learning methods get adapted to time series forecasting. The amazing and unanticipated success of Slawek’s hybrid method will no doubt lead many researchers to try similar methods on other This also manifested in the advance detail given on the upcoming M5 competition, which is going to focus on the role of explanatory variables in forecasting time series, as well as feature online learning as more data become available. I think many people felt that the techniques incorporating explanatory variables are not yet optimal and represent an opportunity to advance the state of the Ensembling of methods A famous finding in the forecasting literature is that combinations of methods usually do better than single methods, and that held true in the M4 competition. Slawek’s winning approach consisted of an ensemble of LSTM models (I discuss the very smart idea of a so-called Mixture of Specialists later) and Paulo’s method used a boosting algorithm to assign weights to different simple methods, which were then combined to produce the final forecasts. Summaries of talks Here are some summaries of my favourite talks of the conference. Slawek Smyl (Uber Technologies): A Hybrid Approach to Forecasting Slawek won the M4 competition by a large margin over the next best entry. His method, described in a short note here, essentially did two things: • Firstly, allow the neural net to learn optimal coefficients of the Holt-Winters algorithm which were then used to normalize each time series • Secondly, forecast the normalized series using the neural net and then restore the series using the Holt-Winters parameters The network design was a stack of various types of Long Short Term Memory cells (with skip connections and dilation). Slawek also used ensembling at several levels to produce the forecast. I found one ensembling method which he proposed to be particularly interesting, the Ensemble of Specialists, which is described in more detail here. Basically, the idea is to take several of the same neural net architectures and allow them to train for a single epoch on some of the training data. Then, allocate each time series to the top-2 neural nets and repeat both steps until the validation error increases. Once the nets are trained, one applies different ensemble methods to derive the final forecasts. This seems like a very smart way of ensuring optimal performance on all types of series – in my own research, I have encountered situations when neural nets trained to a global optimum do not perform as well as would be expected on some time series and I am excited to try out this approach. Spyros Makridakis (University of Nicosia): The contributions of the M4 Competition to the Theory and Practice of Forecasting The slides have been made available here. What stood out most for me about Spyros’ talk was the focus on improving the state of the art of time series forecasting using hard evidence, and that seems to be the key theme running throughout his work on the M competitions and even before. As easy as it might be to favour a method based on how pleasing it is theoretically, the approach during the M competitions has been simply to check what works, and what doesn’t on out of sample error. This created what seems to be a huge amount of work in the M4 competition, in that Spyros and his team have replicated every submission (even those that take upwards of a month to run in full!) and I admire the dedication to advancing the state of the art! Some of the major findings that Spyros discussed are: • Improving accuracy via combining methods • Superiority of Slawek’s hybrid method • The improved precision of prediction intervals in Slawek’s and Pablo’s methods – these had a coverage ratio very close to the required 95% • Increased complexity, as measured by compute time, led to increased accuracy, which I think is a first for the M competitions. • Learning across time series in the winning methods • Poor performance of pure ML methods, which was attributed to these methods overfitting on the univariate time series i.e. not learning across series Spyros then ended with two challenges where improvement is needed – improving the measurement of uncertainty (where there is great potential for ML/DL methods) and improving explanatory models of time series. Nassim Taleb (New York University): Forecasting and Uncertainty: The Challenge of Fat Tailedness I enjoyed hearing Nassim explain some of his ideas in the context of forecasting. My key takeaway here was that when forecasting, one might not be as interested in the underlying random variable being forecast, call it x, but rather the payoff function of x, which is f(x). The payoff function can be manipulated in various ways by taking positions against the underlying x, for example, one could hedge out tail risks, and therefore Nassim was effectively offering a way of dealing with uncertainty in x, which is manipulate your payoffs so that you are not hurt, and ideally gain, from the parts of x that you do not know about or are at most risk from. One interesting connection that he made was between the way options traders have always approximated payoff functions using a European options, which effectively comes down to function approximation using the ReLu activation in deep learning. Andrea Pasqua (Data Science Manager, Uber): Forecasting at Uber: Machine Learning Approaches Andrea’s talk covered how time series forecasting is done at Uber, with their own set of interesting and challenging issues, such as a huge number of series to forecast, dealing with extreme events, and the cold-start problem when services are launched in a new city. He gave a very nice walkthrough of how Uber arrived at the solutions currently in production, by going through each stage of model choice and development. It seems as if this team has benefited from Quantile Random Forests and I plan to read up more about these. It was refreshing to see how approachable the speakers at the M4 conference were, and how willing the winners of the competition were to share of their expertise and knowledge. The organizers of the conference put together a great event and well done to them! In the next post I hope to discuss some of what I believe the actuarial profession could learn from the advances in the state of the art of forecasting that were shown at the M4 conference. Makridakis, S., E. Spiliotis and V. Assimakopoulos. 2018. “Statistical and Machine Learning forecasting methods: Concerns and ways forward”, PLOS ONE 13(3):e0194889. Shmueli, G. 2010. “To explain or to predict?”, Statistical Science:289-310. 4 thoughts on “Thoughts on the M4 Conference” 3. Good article.On what sample do the organizers evaluate the accuracy of the forecasts? Do they have hidden sample for that? I saw in their site that train and test sample was made available to contest participants. 1. The test data was only released after the competition ended. Quite a debate currently going on on Twitter about this… This site uses Akismet to reduce spam. Learn how your comment data is processed.
{"url":"https://ronaldrichman.co.za/2018/12/17/thoughts-on-the-m4-conference/","timestamp":"2024-11-14T10:48:51Z","content_type":"text/html","content_length":"74658","record_id":"<urn:uuid:bd1dbfcb-c92b-456d-a0ef-479cb5bc2a04>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00407.warc.gz"}
That Looks Good Tl;dr — This demo app helps you decide when you’re hungry but can’t decide on what to eat. Inspired by Tinder, on your phone you swipe right on dishes that look good, and swipe left on dishes that don’t. It’s written in Typescript and React, with a full GraphQL AWS Lambda serverless backend using Apollo server, and login and authentication handled with Auth0. I deployed the front end with Vercel (formerly Zeit) and back end with Serverless. Integration and unit tests are written in Jest. The photos are sourced from Unsplash. See the code and more details about how it works This was a fun app to write, and it was my first app written in Typescript. I had wanted to start learning Typescript, and after reading some basic documentation, I figured the best way to learn would be to actually use it. I thought the Tinder-like swipe functionality was going to be difficult to implement, but after I figured out how to implement basic dragging (which I wrote a custom React hook useDrag to do), the rotation was basically just one extra line of math: // DishCard.tsx transform: ` translate(${deltaX + transition.vx}px, ${deltaY + transition.vy}px) maxAngle * Math.tanh((deltaX + transition.vx) / (window.innerWidth / 2)) }deg)`, There’s a lot going on in that one line, so let’s unpack it. The amount of rotation depends on how much you’ve dragged the image card horizontally, and in particular the rotation angle is calculated using the hyperbolic tangent function tanh (read as “tanch” [IPA: tæntʃ] which rhymes with “blanch”). The total amount you’ve dragged is given by deltaX + transition.vx, where deltaX is the amount you actually drag, and transition.vx is an amount related to the swiping animation when you like or dislike a dish (and not particularly important for this discussion). I wanted the rotation to saturate when you swipe a distance of half the screen width, so I divided deltaX + transition.vx by window.innerWidth / 2. This works because tanh(1) is approximately equal to 0.76, so that when deltaX + transition.vx equals window.innerWidth / 2, the amount of rotation is 76% of the max rotation angle (discussed below). Moreover, for small values of its argument, tanh changes quickly, while at larger values it changes more slowly (take a look at the graph of tanh at the link above), which is exactly what I wanted. Finally, I multiply the whole thing by maxAngle since tanh approaches 1 for large values of its argument (and -1 for large, negative values). I set maxAngle to be 30 degrees.
{"url":"https://brandonling.dev/that-looks-good/","timestamp":"2024-11-07T18:18:10Z","content_type":"text/html","content_length":"163266","record_id":"<urn:uuid:23ca205b-1fc7-4fd6-82d1-12a5489e90d9>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00147.warc.gz"}
Chapter 10: Comparing Two or More Means by Analysing Variances: ANOVA 1. Which of the following is a pooled variance estimate that constitutes the denominator of an F-ratio? [TY10.1] 1. The between-cells mean square (MS[B]). 2. The mean of the sampling distribution. 3. The sum of the deviations from the grand mean. 4. The grand mean. 5. The within-cells mean square (MS[W]). 2. Which of the following statements is true? [TY10.2] 1. In one-way ANOVA the total sum of squares comprises two main sources of variance: within-groups variance and between-groups variance. Each has the same number of degrees of freedom. 2. In one-way ANOVA the total sum of squares comprises two main sources of variance: within-groups variance and between-groups variance. Each has its own number of degrees of freedom. 3. In one-way ANOVA the total sum of squares comprises three main sources of variance: within-groups variance, between-groups variance and error variance. Each has the same number of degrees of 4. In one-way ANOVA the total sum of squares comprises three main sources of variance: within-groups variance, between-groups variance and information variance. Each has the same number of degrees of freedom. 5. In one-way ANOVA the total sum of squares comprises three main sources of variance: within-groups variance, between-groups variance and information variance. Each has its own number of degrees of 3. What is the point of calculating the value of η^2 in relation to particular F- and p-values? [TY10.2] 1. η^2 is a hypothesis-testing measure that can tell us whether a particular F-value is significant. 2. η^2 is the square of α and can tell us whether a particular p-value is significant. 3. η^2 is a measure of effect size that can tell us whether a particular F-value is significant. 4. η^2 is a measure of effect size that can tell us whether a particular p-value is significant. 5. η^2 is a measure of effect size that can tell us how much variance a particular effect accounts for. 4. A researcher, Isobel, conducts one-way analysis of variance in which she compares the final marks of students who have studied psychology at one of five different institutions, A, B, C, D and E. The study looks at the marks of 100 students, 20 from each institution. On the basis of a given theory, the researcher plans to make four comparisons: between A and B, A and C, C and D, and C and E. Three other researchers make the following observations: X: ‘If Isobel used an experimentwise alpha level of .01, a Bonferroni adjustment would mean that each of these tests had an alpha level of .0025.’ Y: ‘If Isobel used an experimentwise alpha level of .05, a Bonferroni adjustment would mean that each of these tests had an alpha level of.0025.’ Z: ‘If Isobel used an experimentwise alpha level of .05, a Bonferroni adjustment would mean that each of these tests had an alpha level of .0125.’ Who is correct? [TY10.4] 1. Only X. 2. Only Y. 3. Only Z. 4. X and Y. 5. X and Z. 5. An experimental psychologist conducts a study examining whether the speed with which two shapes can be identified as similar or different depends on whether the stimuli are (a) of equal or unequal size and (b) symmetrical or asymmetrical. The mean reaction times for the four cells of the design are as follows: equal symmetrical (M = 132 ms), unequal symmetrical (M = 148 ms), unequal asymmetrical (M = 142 ms), unequal asymmetrical (M = 182 ms). Which of the following is true? [TY10.5] 1. A line graph in which these data are plotted suggests that there might only be a main effect for size. 2. A line graph in which these data are plotted suggests that there might only be a main effect for symmetry. 3. A line graph in which these data are plotted suggests that there might only be a main effect for size and an interaction between size and symmetry. 4. A line graph in which these data are plotted suggests that there might only be a main effect for symmetry and an interaction between size and symmetry. 5. A line graph in which these data are plotted suggests that there might be main effects for size and symmetry and an interaction between size and symmetry. 6. Which of the following statements is false? [TY10.6] 1. One difference between ANOVA and t-tests is that ANOVA allows researchers to compare responses of more than two groups. 2. One difference between ANOVA and t-tests is that ANOVA does not make assumptions about homogeneity, normality and independence. 3. One difference between ANOVA and t-tests is that ANOVA can be used to examine simultaneously the impact of more than one variable. 4. One difference between ANOVA and t-tests is that ANOVA is based on analysis of the ratios of variances. 5. One difference between ANOVA and t-tests is that ANOVA uses two separate degrees of freedom (one for between-cells variance, one for within-cells variance). 7. A researcher conducts a study examining the impact of social support on depression in which he studies how four independent groups that each receive a different type of social support (financial, emotional, intellectual, none) react to a stressful experience. There are 20 people in each group. Which of the following statements is true? [TY10.7] 1. There are 4 degrees of freedom for the between-cells variance. 2. There are 78 degrees of freedom for the within-cells variance. 3. If ANOVA yielded a between-groups F-value of −2.18 this would be significant with alpha set at .05. 4. If ANOVA yielded a between-groups F-value of 0.98 this would be significant with alpha set at .01. 5. None of the above statements is true. 8. Which of the following statements about the F-distribution is false? [TY10.8] 1. The distribution is asymmetrical. 2. The distribution is one-tailed. 3. Higher values of F are associated with a higher probability value. 4. The distribution is positively skewed. 5. If the amount of between-cells variance is equal to the amount of within-subjects variance, the value of F will be 1.00. 9. The SPSS ANOVA output below is from a study in which participants were randomly assigned to one of four conditions in which they were given different instructions to encourage them to continue. Which of the following statements is true? 1. There is no possibility at all that the results are due to chance. 2. It would be useful to supplement the p-value with a measure of effect size. 3. With an alpha level of .01, ANOVA reveals a significant effect for Instruction. 4. As groups are randomly assigned, we need to compute a z-score in order to gauge the size of these effects relative to chance. 5. Both (a) and (b). 10. The SPSS ANOVA output below is from a study in which participants were randomly assigned to one of four conditions in which they were given different instructions to encourage them to continue. Which of the following statements is true? 1. ANOVA shows that there was no effect for Instruction. 2. With an alpha level of .05, ANOVA reveals a significant effect for the Instruction. 3. With an alpha level of .01, ANOVA reveals a significant effect for Instruction. 4. With an alpha level of .05, ANOVA reveals a significant effect for Intercept. 5. With an alpha level of .01, ANOVA reveals a significant effect for Intercept. 11. “A hypothetical model in which mean responses differ across the conditions of an experimental design. This represents an alternative to the null hypothesis that the mean response is the same in all conditions.” What is this a glossary definition of? 1. Hypothetical model. 2. Hypothetical difference model. 3. Difference model. 4. Effects model. 5. Experimental model. 12. “Comparisons between every pair of cells in a given experimental design”. What is this a glossary definition of? 1. cross-lagged comparisons 2. pared comparisons 3. pairwise comparisons 4. a priori comparisons 5. post hoc comparisons 13. “Effects which reflect the impact of one independent variable averaged across all levels of other independent variables, rather than the impact of an interaction between two or more independent variables.” What is is this a glossary definition of? 1. Main effects. 2. Average effects. 3. Significant effects. 4. Non-interaction effects. 5. Isolation effects.
{"url":"https://study.sagepub.com/haslamandmcgarty3e/student-resources/multiple-choice-questions/chapter-10-comparing-two-or-more","timestamp":"2024-11-14T15:10:01Z","content_type":"text/html","content_length":"71419","record_id":"<urn:uuid:f8f4e2b8-6fa4-4fdc-a4fd-220fc6292d9f>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00769.warc.gz"}
Formulog: ML + Datalog + SMT If you read a description of a static analysis in a paper, what might you find? There’ll be some cute model of a language. Maybe some inference rules describing the analysis itself, but those rules probably rely on a variety of helper functions. These days, the analysis likely involves some logical reasoning: about the terms in the language, the branches conditionals might take, and so on. What makes a language good for implementing such an analysis? You’d want a variety of features: • Algebraic data types to model the language AST. • Logic programming for cleanly specifying inference rules. • Pure functional code for writing the helper functions. • An SMT solver for answering logical queries. Aaron Bembenek, Steve Chong, and I have developed a design that hits the sweet spot of those four points: given Datalog as a core, you add constructors, pure ML, and a type-safe interface to SMT. If you set things up just right, the system is a powerful and ergonomic way to write static analyses. Formulog is our prototype implementation of our design; our paper on Formulog and its design was just [S:conditionally:S] accepted to OOPSLA 2020. To give a sense of why I’m excited, let me excerpt from our simple liquid type checker. Weighing in under 400 very short lines, it’s a nice showcase of how expressive Formulog is. (Our paper discusses substantially more complex examples.) type base = | base_bool type typ = | typ_tvar(tvar) | typ_fun(var, typ, typ) | typ_forall(tvar, typ) | typ_ref(var, base, exp) and exp = | exp_var(var) | exp_bool(bool) | exp_op(op) | exp_lam(var, typ, exp) | exp_tlam(tvar, exp) | exp_app(exp, exp) | exp_tapp(exp, typ) ADTs let you define your AST in a straightforward way. Here, bool is our only base type, but we could add more. Let’s look at some of the inference rules: (* subtyping *) output sub(ctx, typ, typ) (* bidirectional typing rules *) output synth(ctx, exp, typ) output check(ctx, exp, typ) (* subtyping between refinement types is implication *) sub(G, typ_ref(X, B, E1), typ_ref(Y, B, E2)) :- exp_subst(Y, exp_var(X), E2) = E2prime, encode_ctx(G, PhiG), encode_exp(E1, Phi1), encode_exp(E2prime, Phi2), is_valid(`PhiG /\ Phi1 ==> Phi2`). (* lambda and application synth rules *) synth(G, exp_lam(X, T1, E), T) :- wf_typ(G, T1), synth(ctx_var(G, X, T1), E, T2), typ_fun(X, T1, T2) = T. synth(G, exp_app(E1, E2), T) :- synth(G, E1, typ_fun(X, T1, T2)), check(G, E2, T1), typ_subst(X, E2, T2) = T. (* the only checking rule *) check(G, E, T) :- synth(G, E, Tprime), sub(G, Tprime, T). First, we declare our relations—that is, the (typed) inference rules we’ll be using. We show the most interesting case of subtyping: refinement implication. Several helper relations (wf_ctx, encode_*) and helper functions (exp_subst) patch things together. The typing rules below follow a similar pattern, mixing the synth and check bidirectional typing relations with calls to helper functions like typ_subst. fun exp_subst(X: var, E : exp, Etgt : exp) : exp = match Etgt with | exp_var(Y) => if X = Y then E else Etgt | exp_bool(_) => Etgt | exp_op(_) => Etgt | exp_lam(Y, Tlam, Elam) => let Yfresh = fresh_for(Y, X::append(typ_freevars(Tlam), exp_freevars(Elam))) let Elamfresh = if Y = Yfresh then Elam else exp_subst(Y, exp_var(Yfresh), Elam) typ_subst(X, E, Tlam), | exp_tlam(A, Etlam) => exp_tlam(A, exp_subst(X, E, Etlam)) | exp_app(E1, E2) => exp_app(exp_subst(X, E, E1), exp_subst(X, E, E2)) | exp_tapp(Etapp, T) => exp_tapp(exp_subst(X, E, Etapp), typ_subst(X, E, T)) Expression substitution might be boring, but it shows the ML fragment well enough. It’s more or less the usual ML, though functions need to have pure interfaces, and we have a few restrictions in place to keep typing simple in our prototype. There’s lots of fun stuff that doesn’t make it into this example: not only can relations call functions, but functions can examine relations (so long as everything is stratified). Hiding inside fresh_for is a clever approach to name generation that guarantees freshness… but is also deterministic and won’t interfere with parallel execution. The draft paper has more substantial examples. We’re not the first to combine logic programming and SMT. What makes our design a sweet spot is that it doesn’t let SMT get in the way of Datalog’s straightforward and powerful execution model. Datalog execution is readily parallelizable; the magic sets transformation can turn Datalog’s exhaustive, bottom-up search into a goal-directed one. It’s not news that Datalog can turn these tricks— Yiannis Smaragdakis has been saying it for years!—but integrating Datalog cleanly with ML functions and SMT is new. Check out the draft paper for a detailed related work comparison. While our design is, in the end, not so complicated, getting there was hard. Relatedly, we have also have an extended abstract at ICLP 2020, detailing some experiments in using incremental solving modes from Formulog. You might worry that Datalog’s BFS (or heuristic) strategy wouldn’t work with an SMT solver’s push/pop (i.e., DFS) assertion stack—but a few implementation tricks and check-sat-assuming indeed provide speedups. 3 Comments 1. cool 2. can you guys write a tutorial that targets a really basic language please 1. That’s a very good idea! I have an undergraduate student who will be helping me convert Smaragdakis and Balatsouras’s “Pointer Analysis” tutorial, but that’s for a modestly complex language. I’ll see what we can do.
{"url":"https://www.weaselhat.com/post-835.html","timestamp":"2024-11-04T14:12:50Z","content_type":"text/html","content_length":"41735","record_id":"<urn:uuid:33a24291-e8fa-4958-b3e2-8168a30dc7d0>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00133.warc.gz"}
These four questions will make your students think about potential These four questions will help your students think about potential I have been going through a lot of the clicker-type conceptual physics questions that I have collected over the years from other physics teachers who have been gracious enough to share them, when I came across a sequence of four questions related to the potential and potential energy of a three charge system. I really like the set of questions, but I’m looking for some feedback from other teachers on how to best use them. Here’s the system: Three charges form a triangle at points 1, 2 and 3. The electrical potential at q₃ has some value V₃. (This setup of the system is the same for all four questions below.) Here are the questions: 1. If the charge of q₃ is doubled, the value of the electrical potential at the point 3 will: A.) increase by a factor of two B.) increase by a factor of four C.) decrease by a factor of two D.) decrease by a factor of four E.) remain the same 2. If the charges q₁, q₂, and q₃ is doubled, the value of the electrical potential at point 3 will: A.) increase by a factor of two B.) increase by a factor of four C.) increase by a factor of eight D.) decrease by a factor of four E.) remain the same 3. If the charge of q₃ is doubled, the value of the electrical…
{"url":"https://achmorrison.medium.com/these-four-questions-will-make-your-students-think-about-potential-53f6f00eda89","timestamp":"2024-11-03T00:50:28Z","content_type":"text/html","content_length":"90040","record_id":"<urn:uuid:791c5980-0680-47fb-bad0-42bafcc13f1c>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00195.warc.gz"}