Your Daily Equation

Brian Greene, Physicist and Author

Brian Greene is a professor of physics and mathematics at Columbia University, recognized for a number of groundbreaking discoveries in his field of superstring theory. His books The Elegant Universe, The Fabric of the Cosmos, and The Hidden Reality have collectively spent 65 weeks on The New York Times bestseller list.
A Brief Introduction to Linear Transformation

Indian Summer Painting. Image Source: The Metropolitan Museum of Art

In data analysis, we may gain greater insight by expressing a variable in a different form. For example, you could use a different scale to better visualize variables whose points lie close together. Many statistical tests assume that the data are normally distributed, so if the underlying data are not normal, we need to transform them to be approximately normal before applying these tests. Sometimes two variables in a data set are not directly comparable. For example, a person's height can be measured in inches; the same height, without loss of information, can instead be measured in meters or feet. Hence we must transform such variables onto a common scale before proceeding with the analysis.

A transformation is a mathematical expression that defines a one-to-one correspondence between numeric systems. For example, a height in inches can be transformed into centimeters using the rule: cm = 2.54 * inches (there are 2.54 centimeters per inch). Every point on the centimeter scale is matched uniquely to one and only one number on the inch scale. There are two types of transformation: linear and nonlinear. A linear transformation preserves the shape of the original distribution, while a nonlinear transformation changes it. In this tutorial we discuss linear transformation.

Linear transformation

Linear transformations are mathematical expressions that use only a combination of addition, subtraction, multiplication, or division to set up the correspondence between numeric systems. For example, if we used the original numbers for measuring players' heights in two countries, those numbers would not be comparable because of the different measurement systems used (in the US, height is measured in inches or feet; in India, in centimeters or meters).
In this case, to make the values compatible with each other, we multiply the heights in inches by 2.54, since there are 2.54 centimeters per inch. In the data frame player_height, the heights in inches and in centimeters are computed below:

library(dplyr)
player_height <- data.frame(Players = sprintf("Player_%01d", 1:20),
                            Height_inch = sample(seq(from = 68, to = 83, by = 1), size = 20, replace = TRUE))
player_height <- player_height %>% mutate(Height_cm = Height_inch * 2.54)

Because the rule converting inches to centimeters multiplies each value in the distribution by a positive constant, it is an example of a linear transformation. Notice that the order of data points is retained under this transformation: Player_8 is taller than Player_7 regardless of whether height is measured in centimeters or inches.

A linear transformation may involve multiplication, addition, or a combination of the two. Its general form is:

\[X_{new} = K*X_{old}+C\]

where K and C are constants and K is not equal to 0. K covers both multiplication and division: with K = 1/2, multiplying by K is equivalent to dividing by 2. Likewise, C covers both addition and subtraction: with C = -4, adding C units is equivalent to subtracting 4 units.

K changes the scale of the original values:
• K > 1 or K < -1: stretches the horizontal axis
• -1 < K < 1 (K ≠ 0): compresses the horizontal axis

C translates the location of the original values:
• C > 0: the original values are shifted right along the horizontal axis.
• C < 0: the original values are shifted left along the horizontal axis.

The example below illustrates a linear transformation with K = 2 and C = 10.
X <- sample(1:10, 100, replace = TRUE)
par(mfrow = c(1, 2))
hist(X, xlim = c(0, 40), breaks = c(0:40), main = "Original data")
hist(2*X + 10, xlim = c(0, 40), breaks = c(0:40), main = "Transformed data")

Notice that the scale of the new values is doubled (each unit increase in X produces a two-unit increase in $2X+10$) and their location is shifted in the positive direction along the horizontal axis: the smallest value moves from 1 to 12.

Effect of linear transformation on the shape of a distribution

A linear transformation also retains the relative distances between points in the distribution: two points that are far apart in the original data remain far apart, to the same extent, in the transformed data. In the data frame player_height, Player_5's height (70 in.) is closer to Player_8's height (73 in.) than to Player_4's height (81 in.), to the same extent in both inches and centimeters. Because relative distances do not change under a linear transformation, neither does the general shape of the distribution. Let's create box-plots for heights in inches and centimeters.

boxplot(player_height$Height_cm, player_height$Height_inch, names = c("height cm", "height inch"))

Clearly the two box-plots have the same shape. By transforming the data linearly from inches to centimeters, the general shape of the distribution stays the same, although the central tendency and spread can change, as they have in this example.

Effect of linear transformation on summary statistics of a distribution

lapply(player_height[-1], function(x) c(std_dev = sd(x), avg = mean(x)))

std_dev       avg
4.671019  75.650000
std_dev       avg
11.86439  192.15100

Notice the difference in central tendency and spread between the two box-plots.
The mean height in centimeters (192.15) equals the mean height in inches (75.65) multiplied by 2.54, and the standard deviation of the heights in centimeters (11.86) equals the standard deviation in inches (4.67) multiplied by 2.54. In other words, both the mean and the standard deviation are multiplied by 2.54, the value by which the heights themselves were multiplied. While the new standard deviation is larger than the original by a factor of 2.54, the new variance is larger than the original variance by a factor of 2.54^2 = 6.45.

The effects on the measures of central tendency, spread, and shape of a distribution when multiplying all values by a nonzero constant K are summarized as follows:

\[X_{new} = K*X_{old}\]

Measures of central tendency:
• Mode_new = K*Mode_old
• Median_new = K*Median_old
• Mean_new = K*Mean_old

Measures of spread (these use the absolute value of K, since measures of spread cannot be negative):
• IQR_new = |K|*IQR_old
• SD_new = |K|*SD_old
• Range_new = |K|*Range_old
• Variance_new = K^2*Variance_old

Shape of the distribution depends on the sign of K:
• If K is positive: Skewness_new = Skewness_old
• If K is negative: Skewness_new = (-1)*Skewness_old

The following box-plots give an example of an additive transformation. We have created side-by-side box-plots of the original data, original data + 10, and original data - 10. The central tendency of each distribution is given by the median of the box-plot; the operation performed on every score was also performed on the median. The median of the original untransformed data is 6. After adding 10 to every point, the median of the second box-plot is 16. Similarly, the median of the third box-plot from the left is -4, or 10 less than that of the untransformed variable. The spread is given by the IQR of the distribution; adding or subtracting a constant does not change the spread. Similarly, the shape of the distribution is unchanged by an additive transformation.
X <- sample(1:10, 100, replace = TRUE)
boxplot(X, X + 10, X - 10, names = c("original data", "plus 10", "minus 10"))

The effect of an additive transformation on summary statistics is described below:

\[X_{new} = C + X_{old}\]

Measures of central tendency:
• Mode_new = Mode_old + C
• Median_new = Median_old + C
• Mean_new = Mean_old + C

Measures of spread:
• IQR_new = IQR_old
• SD_new = SD_old
• Range_new = Range_old
• Variance_new = Variance_old

Shape of the distribution: Skewness_new = Skewness_old.

Z-score: the most common linear transformation

To make data in different units comparable, the most widely used transformation is the z-score. A z-score measures the number of standard deviations a data value lies from the mean of its distribution. A z-score distribution has mean 0 and standard deviation 1. Any distribution may be re-expressed as z-scores by the equation:

\[Z = \frac{X-\bar{X}}{S}\]

where X is an individual observation, X-bar is the mean of the distribution, and S is its standard deviation. The z-transformation is calculated as:

sapply(player_height[-1], function(x) (x - mean(x))/sd(x))

By transforming the players' heights in inches and centimeters into z-scores, we learn the placement of each player's height relative to the mean and standard deviation of its distribution. Because conversion to z-scores is a linear transformation, the order of the players' heights in the original distribution is preserved in the z-distribution: if Player_6 is taller than Player_5 in the original distribution, the same holds in the z-distribution. The shape of the z-score distribution is the same as that of the original distribution, apart from possible shrinking or stretching along the x-axis. You can also observe that the z-scores for both variables are identical, since one is simply a linear transformation of the other.
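As a numerical cross-check of these z-score properties, here is a short sketch in Python (rather than the tutorial's R, purely for illustration; the heights are hypothetical example values). It standardizes the same heights in inches and in centimeters and confirms that the two z-score vectors coincide, with mean 0 and standard deviation 1.

```python
# Sketch of the z-score transformation (Python used for illustration;
# the tutorial itself works in R). Heights below are hypothetical.
import statistics

def z_scores(xs):
    """Standardize a sample: subtract the mean, divide by the sample SD."""
    m = statistics.mean(xs)
    s = statistics.stdev(xs)  # sample standard deviation, matching R's sd()
    return [(x - m) / s for x in xs]

height_inch = [68, 70, 73, 75, 81, 79, 72, 74]
height_cm = [h * 2.54 for h in height_inch]

z_inch = z_scores(height_inch)
z_cm = z_scores(height_cm)

# The z-scores agree because cm is a linear transformation of inches.
assert all(abs(a - b) < 1e-9 for a, b in zip(z_inch, z_cm))

# Mean 0 and standard deviation 1 after standardization.
assert abs(statistics.mean(z_inch)) < 1e-9
assert abs(statistics.stdev(z_inch) - 1.0) < 1e-9
```

The multiplicative constant 2.54 cancels in the numerator and denominator of the z formula, which is exactly why the two sets of z-scores are identical.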
There are two types of transformation: linear and nonlinear. A linear transformation preserves the shape of the original distribution, while a nonlinear transformation changes it. In this article, we saw examples of linear transformation. We learned:
• A linear transformation involves multiplication, addition, or a combination of the two, applied to each data value with some constant.
• The effect of a linear transformation on the shape of the data.
• The effect of a linear transformation on the summary statistics of the data.
• The z-score, the most common type of linear transformation.
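The multiplicative and additive rules summarized in this article can be verified numerically; a minimal sketch (in Python rather than the tutorial's R, and with made-up data):

```python
# Numerical check of how X_new = K*X_old + C moves summary statistics.
# The data values here are hypothetical, chosen so the median is 6
# as in the box-plot example.
import statistics

K, C = 2, 10
x = [1, 4, 6, 6, 7, 10]
y = [K * v + C for v in x]  # the linear transformation

# Central tendency both scales by K and shifts by C.
assert abs(statistics.mean(y) - (K * statistics.mean(x) + C)) < 1e-9
assert statistics.median(y) == K * statistics.median(x) + C

# Spread ignores C and scales by |K|; variance scales by K**2.
assert abs(statistics.stdev(y) - abs(K) * statistics.stdev(x)) < 1e-9
assert abs(statistics.variance(y) - K**2 * statistics.variance(x)) < 1e-9
```

Changing C alone moves every statistic of location by C and leaves every statistic of spread untouched, which is the additive case tabulated above.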
How to Perform a Pytorch Matrix Dot Product

If you're looking to learn how to perform a PyTorch matrix dot product, you've come to the right place. In this blog post, we'll walk you through the steps necessary to get the job done. By the end, you'll be a PyTorch matrix dot product expert!

A matrix dot product is a fundamental operation in many numerical computations, including deep learning. The basic idea is to take two matrices, A and B, and multiply them together to produce a third matrix C. In Python, this can be done using the PyTorch library. First, we need to import the PyTorch library:

import torch

Now, we can define our matrices A and B:

A = torch.tensor([[1, 2], [3, 4]])
B = torch.tensor([[5, 6], [7, 8]])

To perform the matrix dot product, we simply use the torch.mm() function:

C = torch.mm(A, B)

What is a Pytorch Matrix Dot Product?

A PyTorch matrix dot product is a mathematical operation that takes two matrices and produces a third matrix as a result. For vectors, the dot product is also known as the inner product or scalar product.

How to Perform a Pytorch Matrix Dot Product

Performing a matrix dot product in PyTorch is a simple operation that can be accomplished in just a few lines of code. First, we import the PyTorch library and create two matrices. Next, we use torch.mm() to perform the matrix product. Note that torch.dot() works only on 1-D tensors (vectors); calling it on 2-D matrices raises an error, so for matrices you should use torch.mm() or torch.matmul() instead. Here's a complete example:

import torch

# Create two matrices
matrix_1 = torch.tensor([[1, 2], [3, 4]])
matrix_2 = torch.tensor([[5, 6], [7, 8]])

# Perform the matrix product
result = torch.mm(matrix_1, matrix_2)

Why Perform a Pytorch Matrix Dot Product?

There are many reasons why you might want to perform a PyTorch matrix dot product.
Perhaps you are trying to find the relationships between different variables, or you are trying to improve the performance of your machine learning algorithms. In any case, the PyTorch matrix dot product is a powerful tool that can help you achieve your goals.

When to Perform a Pytorch Matrix Dot Product

In mathematics, the dot product (also called the scalar product or inner product) of two vectors combines their magnitudes and directions into a single scalar. The dot product may be applied to any two equal-length sequences of numbers, whether they are vectors in $\mathbb{R}^n$ or not.

How to Optimize a Pytorch Matrix Dot Product

The matrix dot product is a fundamental operation in linear algebra and is used extensively in machine learning and other numerical computing tasks. In PyTorch, the matrix product is performed using the torch.mm() function. There are a few things you can do to optimize its performance. First, make sure you are using a recent version of PyTorch, as older versions may be less efficient. Second, use the narrowest data type that suits your problem; for real-valued matrices, 32-bit float tensors (torch.FloatTensor) are generally faster and lighter than 64-bit doubles (torch.DoubleTensor). Finally, if the same product is needed repeatedly, store the result rather than recomputing it unnecessarily.

We've seen how to perform a PyTorch matrix product on two matrices, A and B, using torch.mm(). We've also seen that this operation can be applied to a batch of matrices using torch.bmm(), and that a matrix-vector multiplication can be performed with torch.mv().
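To make explicit what torch.mm() computes, here is a plain-Python reference implementation of the matrix product for the 2x2 example above. This is a sketch for illustration only; real code should of course use the optimized PyTorch routine.

```python
def matmul(a, b):
    """Naive matrix product: c[i][j] = sum_k a[i][k] * b[k][j]."""
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "inner dimensions must match"
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# Same result torch.mm(A, B) would produce for these matrices.
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

Tracing one entry: the top-left element is 1*5 + 2*7 = 19, which is the dot product of A's first row with B's first column.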
Seminars — Lisbon Young Researchers Seminar. – Europe/Lisbon Room P3.10, Mathematics Building — Online Damián Mayorga Pena, Instituto Superior Técnico, Universidade de Lisboa Machine learning Calabi-Yau metrics Ricci flat metrics for Calabi-Yau threefolds are not known analytically. In this talk, I will discuss techniques from machine learning to deduce numerical flat metrics on Calabi-Yau two- and three-folds. In particular, I will focus on a particular type of approximation known as spectral neural networks. This type of network produces an exact Kähler metric. I will discuss the metric approximation for various examples, with particular focus on the Cefalú family of quartic two-folds, for which we study the corresponding characteristic forms. Furthermore, from the computation of the Euler characteristic, I will demonstrate that the numerical computations match the expectations, even in the case of singular geometries.
Singularities of mean convex level set flow in general ambient manifolds

We prove two new estimates for the level set flow of mean convex domains in Riemannian manifolds. Our estimates give control, exponential in time, for the infimum of the mean curvature, and for the ratio between the norm of the second fundamental form and the mean curvature. In particular, the estimates remove a stumbling block that has been left after the work of White [16,17,20] and Haslhofer–Kleiner [9], and thus allow us to extend the structure theory for mean convex level set flow to general ambient manifolds of arbitrary dimension.

Bibliographical note: Publisher Copyright © 2018 Elsevier Inc.

Keywords:
• Level set flow
• Mean curvature flow
1. Consider the collection of measurable rectangles in \(\R^2\):

\(S = \{ (a_1,b_1] \times (a_2,b_2] ~\colon~ a_i < b_i, ~ i=1,2 \}\)

Show that \(S\) is a semi-ring of sets.

2. Let \(f\) be an integrable function on a measure space. Show that for every \(\epsilon > 0\), there is a \(\delta > 0\) such that for all \(A\) with \(m(A) < \delta\), \(\int_A |f| < \epsilon\). Hint: Start with \(f \geq 0\), and the identity

\(\int_A f = \int_A f 1_{f > L} + \int_A f 1_{f \leq L}\)

Then let \(L \to \infty\). Note the "uniformity" in the statement: the integral over \(A\) is small for all \(A\) of small measure. This is related to the notion of uniform integrability we will encounter.

3. Let \(f(x) = \sin(x)/x\). Show that the (Lebesgue) integral \(\int_{(0,\infty)} f\) is not defined, but \(\lim_{t \to \infty} \int_{(0,t)} f\) exists. Hint: First control \(\int_{(0,\pi)} f\) using calculus, and then estimate \(\int_{(2n\pi,(2n+2)\pi)} f\) using standard inequalities.
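A quick numerical illustration of problem 3 (not part of the exam): the truncated integrals \(\int_{(0,t)} \sin(x)/x \, dx\) settle toward \(\pi/2\) as \(t\) grows, even though the positive and negative parts each diverge, so the Lebesgue integral over \((0,\infty)\) fails to exist.

```python
# Numerical sketch only; n and the grid are arbitrary illustration choices.
import math

def si(t, n=200_000):
    """Trapezoidal approximation of the sine integral Si(t) = ∫_0^t sin(x)/x dx."""
    h = t / n
    total = 0.5 * (1.0 + math.sin(t) / t)  # endpoint terms; integrand -> 1 as x -> 0
    total += sum(math.sin(k * h) / (k * h) for k in range(1, n))
    return total * h

# The truncated integrals approach pi/2 ≈ 1.5708 as t grows.
for t in (50, 100, 200):
    print(t, si(t))
```

The oscillating tail contributes terms of alternating sign and shrinking size (roughly like an alternating series over the intervals \((2n\pi, (2n+2)\pi)\)), which is why the improper limit exists while absolute integrability fails.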
Impact of Sub-Economic on Money Supply in Nigeria: An Autoregressive Distributed Lag (ARDL) Approach

1. Introduction

The demand for money refers to the total amount of wealth that households and companies choose to hold; it is affected by several factors such as income levels, interest rates, price levels (inflation), and uncertainty. The impact of these factors on the demand for money can be attributed to three motives: transaction, precautionary, and speculative. The money demand function provides a context for reviewing the efficacy of monetary policies, an imperative issue for overall macroeconomic stability. Money demand is an important indicator of growth for a particular economy. [1] affirmed that an increase in money demand mostly indicates a country's improved economic situation, whereas falling demand is normally a sign of an abating economic climate.

Monetarists accentuate the role of governments in controlling the amount of money in circulation. Their assessment of monetary economics is that variation in the money supply has a major influence on national product in the short run and on the price level in the long run. They also claim that the objectives of monetary policy are best met by steering the growth rate of the money supply. Today's monetarism is allied with the work of Friedman, who was among the generation of economists to accept Keynesian economics and then criticize it on its own terms. Friedman argued that inflation is always and everywhere a monetary phenomenon. Similarly, he backed a central bank policy aimed at keeping the supply of and demand for money in equilibrium, as measured by growth in productivity and demand [2]. For instance, the European Central Bank formally bases its monetary policy on money supply goals.
Adversaries of monetarism, including neo-Keynesians, argue that the demand for money is central to the supply of money, which is controlled by the central bank, for example the Central Bank of Nigeria (CBN), while some conservative economists dispute that the demand for money can be predicted at all. Recently, the rise in Nigeria's exchange rates has been acknowledged as the most imperative threat to the country's economy. Exchange rate instability, through the uncertainty it generates, negatively affects the economic decisions of investors, the implementation of growth policy, the fight against unemployment, and economic convergence [3]. [4] pointed out that monetary policy has no short-term impact on price changes; in other words, monetary and fiscal policy tools are less important for controlling variations in the general price level in Nigeria in the short term. Conversely, some authors remark that both the short-run and long-run effects of money supply are significant [5] [6] [7].

There are short-term and long-term aspects of money demand. Growing production relates to the long-term aspect of money demand, or the need for money (transaction demand). As [1] stated, an increased issue of money that is consistent with price stability can only be achieved in the long run if it follows the growth of output. In the short term, a decreasing rate of money circulation may cause money demand to rise irrespective of movements in real production. However, an ongoing increase in the money supply, regardless of trends in production, leads to stronger inflationary pressures. Hence, this study sets out to examine the relationship between money supply and other macroeconomic time series.

[8] studied the long-run and short-run money demand functions for Nepal using an annual data set from 1975 to 2009.
ARDL modeling of cointegration was used for the analysis. The bounds test shows the existence of a long-run cointegrating relationship among the demand for real money balances, real GDP, and the interest rate for both narrow and broad monetary aggregates. Furthermore, the CUSUM and CUSUMSQ tests reveal that both the long-run narrow and broad money demand functions are stable. [9] queried the velocity-of-money demand function and its relationship with interest rate fluctuations using Pakistani data. The results established a stable money demand function via the velocity of money, real permanent income per capita, the real interest rate, transitory income, and expected inflation, and revealed that money velocity is independent of the interest rate. [10] revisited the money demand function for the Japanese economy; the results showed instability in money demand due to many changes in Japan's monetary policy. [11] tested the stability of the money demand function for Tonga using the LSE-Hendry General to Specific (GETS) and Johansen Maximum Likelihood (JML) approaches. The results projected a stable long-run cointegrated relationship between real narrow money, real income, and the rate of interest. [12] examined whether financial innovation makes money demand stable or not in Kenya, using quarterly data (1998Q4 to 2013Q3) and the ARDL bounds test; they found that, in the face of financial innovation, money demand in Kenya is stable. Similarly, an earlier study by [13] examined the effect of financial liberalization on money demand in Uganda based on data from 1982Q4 to 1998Q4. He employed the Johansen cointegration test and found that M2 and its determinants are cointegrated. Thereafter, he used the Chow test to assess the stability of money demand during the period when a financial reform was implemented, and found that the introduction of financial liberalization did not make M2 unstable in Uganda.
[14] measured the monetary aggregate using M2 and employed a Johansen test. The study's findings showed that M2 and its determinants are cointegrated. Based on the results from the Hansen, CUSUM, and CUSUMSQ tests, the author concluded that demand for M2 is stable in Nigeria. [15] used another cointegration technique, the ARDL bounds test, with quarterly data over the period 1970Q1 to 2002Q4. The author measured the monetary aggregate using M2 and found it to be cointegrated with its determinants. The study further tested for parameter consistency using the CUSUM and CUSUMSQ tests, and the results obtained are mixed: CUSUM showed that M2 is stable, while CUSUMSQ showed that it is unstable. [16] examined the stability of the money demand function in a case study of Turkey; Johansen cointegration confirmed a long-run cointegrating relationship among money demand, income, and the interest rate. Conflictingly, [17] reexamined the money demand function for Turkey and found asymmetric, nonlinear behavior; his results projected that the stability of the money demand function depends on the stability of inflation, using monthly data from 1990 to 2012. [18] investigated the money demand function in a case study of Nigeria over the period 1970 to 2010. They established the stability of M1 using the Chow breakpoint, CUSUM, and CUSUMSQ tests, incorporating real income, the short-term interest rate, the real expected exchange rate, the expected inflation rate, and the foreign real interest rate.

Meanwhile, this research will feasibly be of importance not only to scholars regarding the use of statistical tools in the analysis of money demand, drawing conclusions and making decisions from available data; it could also contribute to the available literature on the concept of money demand used by experienced practitioners all around the world.

2.
Aim and Objectives

This study aims to provide a comprehensive analysis of money demand. The specific objectives are:
1) To estimate the effect of financial development on money demand.
2) To analyze the relationship between money demand and other macroeconomic variables in Nigeria.
3) To bring to light the short- and long-run impacts of money demand on inflation and other macroeconomic variables in Nigeria.

3. Research Methodology

3.1. Source of Data

The nature of this study required the use of secondary data. The data utilized are quarterly time series covering the period 1991 to 2018, sourced from the Central Bank of Nigeria database. The analyses are carried out using the EViews 9.0 package.

3.2. Research Methodology

3.2.1. Regression Model (Ordinary Least Squares Method)

A priori expectation: $C>0$, ${\beta }_{1}>0$, ${\beta }_{2}>0$.

The ordinary least squares (OLS) technique is employed to obtain the numerical estimates of the coefficients of the equation. The OLS method is chosen because it possesses some optimal properties, its computational procedure is fairly simple, and it is an essential component of most other estimation techniques. The regression model is given as

${Y}_{i}={\beta }_{0}+{\beta }_{1}{X}_{i}+{\epsilon }_{i}, \quad i=1,2,\cdots ,n$  (3.1)

where ${Y}_{i}$ and ${X}_{i}$ are the dependent and independent variables in the ith observation, respectively, ${\beta }_{0}$ and ${\beta }_{1}$ are unknown coefficients usually obtained by the method of least squares, and ${\epsilon }_{i}$ is the error term. The least squares estimates in this case are given by the simple formulas

${\stackrel{^}{\beta }}_{1}=\frac{\sum {x}_{i}{y}_{i}-\frac{1}{n}\sum {x}_{i}\sum {y}_{i}}{\sum {x}_{i}^{2}-\frac{1}{n}{\left(\sum {x}_{i}\right)}^{2}}$  (3.2)

${\stackrel{^}{\beta }}_{0}=\stackrel{¯}{y}-{\stackrel{^}{\beta }}_{1}\stackrel{¯}{x}$  (3.3)

3.2.2.
Auto-Regressive Distributed Lag (ARDL) Model

Autoregressive distributed lag (ARDL) models are standard ordinary least squares regressions that include lags of both the dependent variable and the independent variables as regressors (Erdoğdu H. and Çiçek H., 2017). The basic form of an ARDL(p, q) regression model is

${Y}_{t}={\beta }_{0}+{\beta }_{1}{Y}_{t-1}+\cdots +{\beta }_{p}{Y}_{t-p}+{\alpha }_{0}{X}_{t}+{\alpha }_{1}{X}_{t-1}+\cdots +{\alpha }_{q}{X}_{t-q}+{\epsilon }_{t}$

${Y}_{t}={\beta }_{0}+{\sum }_{i=1}^{p}{\beta }_{i}{Y}_{t-i}+{\sum }_{i=0}^{q}{\alpha }_{i}{X}_{t-i}+{\epsilon }_{t}$  (3.4)

where ${\epsilon }_{t}$ is a disturbance term; the dependent variable is a function of its own lagged values and of the current and lagged values of the other exogenous variables in the model. p lags are used for the dependent variable, while q lags are used for the exogenous variables.

The bounds testing procedure, developed by [16], requires the estimation of the following equation, which derives the relationship between money supply (M2) and its determinants, the exchange rate (EXR), inflation rate (IFR), credit to the private sector (CPS), and currency in circulation (CIC), as a conditional autoregressive distributed lag:

$\Delta {\text{LM2}}_{t}={\alpha }_{0}+{\sum }_{i=1}^{p}{\alpha }_{1i}\Delta {\text{LM2}}_{t-i}+{\sum }_{i=1}^{{q}_{1}}{\alpha }_{2i}\Delta {\text{EXR}}_{t-i}+{\sum }_{i=1}^{{q}_{2}}{\alpha }_{3i}\Delta {\text{IFR}}_{t-i}+{\sum }_{i=1}^{{q}_{3}}{\alpha }_{4i}\Delta {\text{CPS}}_{t-i}+{\sum }_{i=1}^{{q}_{4}}{\alpha }_{5i}\Delta {\text{CIC}}_{t-i}+{\beta }_{1}{\text{LM2}}_{t-1}+{\beta }_{2}{\text{EXR}}_{t-1}+{\beta }_{3}{\text{IFR}}_{t-1}+{\beta }_{4}{\text{CPS}}_{t-1}+{\beta }_{5}{\text{CIC}}_{t-1}+{\epsilon }_{t}$  (3.5)

where LM2 is the natural log of money supply, $\Delta$ is the first difference operator, and $p,{q}_{1},{q}_{2},{q}_{3}$ and ${q}_{4}$ are the
lag lengths. The null hypothesis for the long run is ${H}_{0}:{\beta }_{1}={\beta }_{2}={\beta }_{3}={\beta }_{4}={\beta }_{5}=0$, which implies no cointegration. The computed F-statistic is compared with critical values or p-values. If the F-statistic falls below the lower bound, there is no cointegration; if it is greater than the upper bound, there is cointegration; and if it lies between the two critical values, the test is inconclusive. If a long-run relationship among the variables is established (cointegration is present), then the model is estimated with an error correction term (ECM), while for a short-run relationship only (no cointegration), an ARDL model is estimated. The error correction model is specified in Equation (3.6):

$\Delta {\text{LM2}}_{t}={\alpha }_{0}+{\sum }_{i=1}^{p}{\alpha }_{1i}\Delta {\text{LM2}}_{t-i}+{\sum }_{i=1}^{{q}_{1}}{\alpha }_{2i}\Delta {\text{EXR}}_{t-i}+{\sum }_{i=1}^{{q}_{2}}{\alpha }_{3i}\Delta {\text{IFR}}_{t-i}+{\sum }_{i=1}^{{q}_{3}}{\alpha }_{4i}\Delta {\text{CPS}}_{t-i}+{\sum }_{i=1}^{{q}_{4}}{\alpha }_{5i}\Delta {\text{CIC}}_{t-i}+{\lambda }_{6}{\text{ECT}}_{t-1}+{\epsilon }_{t}$  (3.6)

where ${\lambda }_{6}$ is the coefficient of the error (or equilibrium) correction term (ECT). A negative and statistically significant error correction term ensures convergence of the dynamics to the long-run equilibrium. The significance of the error correction model provides further confirmation of the cointegration evidence, indicating a long-run movement between economic growth and the explanatory variables. This implies that, in the presence of an external shock resulting in disequilibrium of the system, the model can still converge over time to its normal state, with an average speed of adjustment of ${\lambda }_{6}$ percent per period.
Conversely, the short-run relationship model, $\text{ARDL}(p, q_1, q_2, q_3, q_4)$, is stated in Equation (3.7): $\Delta \text{LM2}_t = \alpha_0 + \sum_{i=1}^{p} \alpha_{1i} \Delta \text{LM2}_{t-i} + \sum_{i=1}^{q_1} \alpha_{2i} \Delta \text{EXR}_{t-i} + \sum_{i=1}^{q_2} \alpha_{3i} \Delta \text{IFR}_{t-i} + \sum_{i=1}^{q_3} \alpha_{4i} \Delta \text{CPS}_{t-i} + \sum_{i=1}^{q_4} \alpha_{5i} \Delta \text{CIC}_{t-i} + \epsilon_t$ (3.7) 4. Presentation and Analysis of Data Presentation of Data The data are quarterly and cover the period from 1991 to 2018. The E-Views 9.0 analysis package is used to carry out all the analysis in this study. Table 1 presents the descriptions of the time series variables considered in this study. Table 2 displays the descriptive statistics for the data. As observed, M2 has a mean, median, maximum and minimum of 7184.08, 2415.83, 27,068.58 and 71.03 respectively for the period examined; its standard deviation and Jarque-Bera statistic are 8150.99 and 16.70 respectively, with a p-value of 0.0002. EXR has a mean, median, maximum and minimum of 119.92, 127.99, 306.21 and 9.52 respectively; its standard deviation and Jarque-Bera statistic are 81.78 and 6.90 respectively, with a p-value of 0.0318. IFR has a mean, median, maximum and minimum of 19.15, 12.20, 78.50 and 2.30 respectively; its standard deviation and Jarque-Bera statistic are 17.31 and 93.17 respectively, with a p-value of 0.0000. Furthermore, CPS has a mean, median, maximum and minimum of 7191.80, 2067.16, 22,967.44 and 79.96 respectively; its standard deviation and Jarque-Bera statistic are 8110.20 and 14.63 respectively (Note: L denotes natural logarithm; variables are in local currency, Naira),
with a p-value of 0.0006. Also, CIC has a mean, median, maximum and minimum of 2691.25, 987.74, 9839.58 and 25.13 respectively for the period examined; its standard deviation and Jarque-Bera statistic are 0.8839 and 15.73 respectively, with a p-value of 0.0004. The M2, CPS and CIC data are converted into natural logarithms in order to stabilize the variance. From the descriptive statistics and the p-values it can be deduced that all the variables depart from normality at the 1% level of significance. Figure 1 and Figure 2 present the time series plots of the series. Figure 1 shows that LM2 has been increasing gradually over the years. EXR was relatively constant from 1991 to 1998, rose abruptly in 1999, maintained a steady increase from 2000 to 2014 and skyrocketed in 2015 and 2016. IFR increased gradually from 1991 to 1996 but declined abruptly in 1997 and oscillated over the period 1998 to 2016. LCPS and LCIC have increased gradually over the years (see Figure 1). The plots also show that all the series exhibit non-stationary behaviour. The augmented Dickey-Fuller (ADF) test is used to formally test for stationarity in the time series. 1) Test for Stationarity The results of the ADF test with a linear time trend are reported in Table 3. Using the ADF test, the null of a unit root cannot be rejected for the variables at the 5% level of significance, which conforms to the time series plots presented earlier. When the ADF test with trend is applied to the first differences, the unit root can be rejected for all five (5) variables at the 5% level of significance. Figure 2 presents the stationary series of the variables at first difference. It can therefore be established that the ARDL model is appropriate, since the data are stationary purely at first difference.
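The idea behind the ADF test can be illustrated with plain numpy. The sketch below runs the simple Dickey-Fuller regression (no augmentation lags, so this is DF rather than ADF, and real inference would compare against Dickey-Fuller critical values, not the normal distribution) on a simulated random walk and on its first difference; the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def df_tstat(y):
    """t-statistic on gamma in the Dickey-Fuller regression
    dy_t = alpha + gamma * y_{t-1} + e_t (no augmentation lags)."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ coef
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return coef[1] / np.sqrt(cov[1, 1])

y = np.cumsum(rng.normal(size=800))  # random walk: unit root at level
t_level = df_tstat(y)                # small in magnitude: cannot reject a unit root
t_diff = df_tstat(np.diff(y))        # large negative: stationary after differencing
print(round(t_level, 2), round(t_diff, 2))
```

This mirrors the paper's finding: non-stationary at level, stationary at first difference.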
2) Regression Model (Ordinary Least Squares Method) We use the econometric procedure to estimate the relationship between the variables. The ordinary least squares (OLS) technique is employed to obtain the numerical estimates of the coefficients of the equation. The OLS method is chosen because it possesses some optimal properties: its computational procedure is fairly simple, and it is an essential component of most other estimation techniques. Figure 2. Plots of the differenced time series. Table 3. Unit root test (ADF test). Note: $\hat{k}$ is the AIC lag term used to select the optimal lag so as to make the residuals white noise. The regression estimation results (Table 4) show that the relationships between the dependent variable LM2 and the independent variables EXR, LCPS, LCIC and the intercept C are significant, except for IFR. However, the regression model (4.1) is a spurious model, since the R-squared is greater than the Durbin-Watson statistic (i.e., 0.9987 > 0.6476). Therefore, the regression model's residual is tested for cointegration using the Engle-Granger residual approach (see Table 5). Table 4. Regression model estimation. Table 5. Engle-Granger residual cointegration test. 3) Cointegration Test Table 5 presents the cointegration test results for the residual of the regression model (4.1). The results show that the p-values for the series LM2, EXR, LCPS and LCIC are significant (less than 1%), so we can reject H0 in favor of cointegration for all the series except IFR, whose p-value is 1.0000. Hence, the results affirm the presence of cointegration (the variables co-move) among LM2, EXR, LCPS and LCIC. Moreover, given the presence of cointegration among the variables (see Table 5), it is crucial to know the nature and significance of the variables' co-movement. Pairwise Granger causality tests were therefore carried out; Table 6 presents the test results.
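The Engle-Granger residual approach mentioned above is a two-step procedure: regress the levels on each other, then test the residuals for a unit root. A hedged numpy sketch on simulated cointegrated data (all values synthetic; a real application would use Engle-Granger critical values in step 2):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two cointegrated series: x is a random walk, y = 2x + stationary noise.
n = 600
x = np.cumsum(rng.normal(size=n))
y = 2.0 * x + rng.normal(size=n)

# Step 1: cointegrating regression y = a + b*x by OLS.
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef

# Step 2: check the residuals for mean reversion, here just the OLS slope
# of d(resid) on resid_{t-1}; a strongly negative slope signals stationary
# residuals and hence cointegration.
g, *_ = np.linalg.lstsq(resid[:-1, None], np.diff(resid), rcond=None)
print(coef[1], g[0])  # slope near 2; g well below zero
```

The superconsistency of the step-1 OLS estimator is why the slope lands so close to the true value even though both series are non-stationary.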
Table 6 shows that the pairwise Granger causality test is significant for four pairs of the variables considered: LCPS Granger-causes LM2, LM2 Granger-causes LCPS, LCIC Granger-causes LCPS and LCPS Granger-causes LCIC, significant at the 1% and 10% levels (see Table 6). Table 6. Pairwise Granger causality tests. Note: *, ** and *** denote significance at 10%, 5% and 1% respectively. However, EXR and IFR show no significant Granger causality with any variable. The subsequent analysis therefore presents the cointegration models (ARDL and VECM) of the variables. 4) Autoregressive Distributed Lag Estimation Table 7 presents the bounds test results of the ARDL models. The results show that, with EXR or IFR as the dependent variable, there is no long-run relationship (no cointegration) with the corresponding exogenous variables. Thus, the null hypothesis of no long-run relationship can be rejected only for ARDL models 1, 4 and 5 (with LM2, LCPS and LCIC as dependent variables). The ARDL models (short-run relationship) for EXR and IFR are specified in models (4.2) and (4.3): $\Delta \text{EXR} = 1.2206\, \Delta \text{EXR}(-1) - 0.2631\, \Delta \text{EXR}(-2) + 14.6556\, \Delta \text{LM2} - 0.0026\, \Delta \text{IFR} - 4.3769\, \Delta \text{LCPS} - 7.3744\, \Delta \text{LCIC} - 24.2920$ (4.2) $\Delta \text{IFR} = 1.4176\, \Delta \text{IFR}(-1) - 0.1883\, \Delta \text{IFR}(-2) - 0.5156\, \Delta \text{IFR}(-3) + 0.2216\, \Delta \text{IFR}(-4) - 0.5504\, \Delta \text{LM2} + 0.0021\, \Delta \text{EXR} + 0.6262\, \Delta \text{LCPS} - 0.5629\, \Delta \text{LCIC} + 4.2937$ (4.3) Note: * indicates significance at the 0.05 level (i.e., F-stat > 4.01 critical value). The long-run relationships exhibited by LM2, LCPS and LCIC with the exogenous variables are estimated by means of a vector error correction model (VECM).
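The bounds-test decision rule reduces to a three-way comparison of the F-statistic with the two critical bounds. A small illustrative sketch (the bound values below are placeholders, not Pesaran, Shin and Smith's tabulated ones; 4.01 echoes the critical value quoted in the table note, but treat it as an assumption):

```python
def bounds_test_decision(f_stat, lower, upper):
    """Bounds-test decision rule: compare the F-statistic with the
    lower and upper critical bounds."""
    if f_stat > upper:
        return "cointegration"      # reject H0: b1 = ... = b5 = 0
    if f_stat < lower:
        return "no cointegration"   # fail to reject H0
    return "inconclusive"           # F-stat falls between the bounds

# Illustrative critical bounds at the 5% level (hypothetical values):
print(bounds_test_decision(5.2, lower=2.86, upper=4.01))  # cointegration
```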
5) Vector Error Correction Model Estimation The existence of cointegration among LM2, EXR, IFR, LCPS and LCIC, together with the results of the ARDL bounds test, leads us to apply the Granger causality test to obtain a clear picture of the causal relationships among these variables. Table 8 presents the cointegrating equation and depicts the long-run relationship between LM2, EXR, IFR, LCPS and LCIC. The results show that LCPS and LCIC have a positive and significant impact on LM2, while EXR and IFR have an insignificant impact on LM2. A 1% increase in LCPS and LCIC will lead to an increase in LM2 of 31.15% and 62.18% respectively. The cointegrating equation and long-run model are given by models (4.4) and (4.5): $\text{ECT}_{t-1} = \text{LM2}_{t-1} - 0.0001\, \text{EXR}_{t-1} - 0.0011\, \text{IFR}_{t-1} - 0.3115\, \text{LCPS}_{t-1} - 0.6218\, \text{LCIC}_{t-1} - 1.1965$ (4.4) $\text{LM2}_{t-1} = 0.0001\, \text{EXR}_{t-1} + 0.0011\, \text{IFR}_{t-1} + 0.3115\, \text{LCPS}_{t-1} + 0.6218\, \text{LCIC}_{t-1} + 1.1965$ (4.5) The results of the VECM Granger causality analysis are reported in Table 9. The path of causality can be divided into short-run and long-run causality. The results show that LM2 causes LCIC (a financial development variable) in the short run only, but LCIC causes LM2 in both the short and the long run. Thus, we can broadly say that bidirectional causality exists between currency in circulation and money demand (LM2). Also, LM2 causes LCPS (a financial development variable) in both the short and the long run, while LCPS does not cause LM2 in either. So a unidirectional causality exists between money demand and credits to private sectors. Lastly, LCIC causes itself only in the short run. The VECM residual diagnostic test was also applied to the empirical model to measure the adequacy of the model specification.
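The interpretation of the ECT coefficient as a speed of adjustment can be made concrete. With a hypothetical coefficient of -0.25 (an assumption for illustration, not the paper's estimate), a quarter of any disequilibrium is corrected each period, so the remaining fraction after k periods is (1 + λ)^k:

```python
# Speed of adjustment implied by an ECT coefficient (hypothetical lambda).
lam = -0.25  # negative and significant, as convergence requires
remaining = [(1 + lam) ** k for k in (1, 4, 8)]  # disequilibrium left after k periods
print(remaining)  # [0.75, 0.31640625, 0.100112915...]
```

A more negative λ means faster convergence back to the long-run equilibrium after a shock.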
As displayed in Table 10, the computed residual serial correlation Lagrange multiplier (LM) test for AR[4] = 31.41 is statistically insignificant at conventional significance levels, which suggests that the disturbances are serially uncorrelated. 6) Variance Decomposition Approach The variance decomposition approach (VDA) is an improved approach to Granger causality. It indicates the magnitude of the forecast error variance of a series accounted for by innovations from the independent variables over different time horizons. Note: Standard errors in ( ) and t-statistics in [ ]; ^a significant at 1%. Table 9. VECM Granger causality analysis. Source: Authors' computation. Note: ( ) standard errors, [ ] t-statistics; ^a and ^b significant at 1% and …% respectively. Table 10. VEC residual serial correlation LM tests. Probabilities from chi-square with 25 df. Table 11 incorporates the results of the variance decomposition approach (VDA). It presents the forecast error variance of LM2, LCPS and LCIC. Period (1) signifies the short run, while further down to period (10) signifies the long run. Table 11. Variance decomposition approach. Cholesky ordering: LM2 EXR IFR LCPS LCIC. According to the VDA results, 77.56% of LM2 is explained by itself, 0.16% by EXR, 0.34% by IFR, 18.75% by LCPS and 3.19% by LCIC. LCPS accounts for the largest share in explaining LM2. The contributions of LM2, EXR, IFR and LCIC to LCPS are 3.98%, 1.35%, 1.92% and 0.26% respectively; 92.50% of LCPS is explained by itself. Similarly, 81.44% of LCIC is explained by LM2, 0.05% by EXR, 0.83% by IFR, 2.69% by LCPS and 14.99% by itself. 5. Summary This research work examined the impact of financial development on money demand in Nigeria by means of the ARDL approach. It examined the quarterly series of M2, exchange rate (EXR), inflation rate (IFR), credits to private sector (CPS) and currency in circulation (CIC).
The data span from 1991 to 2018. In the preliminary analysis, the descriptive statistics and distributions of all the series revealed the conventional facts. The time series plots and augmented Dickey-Fuller tests of the original series indicate non-stationarity, thus necessitating appropriate transformation to achieve stationarity. In the subsequent analysis, the study employed a regression model. The regression model's residual was tested for cointegration using the Engle-Granger residual approach, the significance of the variables' co-movement was checked by pairwise Granger causality tests, and ARDL and VECM models were estimated in order to account for the short-run and long-run relationships among the variables. 5.1. Conclusions The objectives of the study have been accomplished. The Engle-Granger residual test and the pairwise Granger causality test were applied to check for cointegration among the variables, and both confirmed cointegration. The ARDL and VECM confirm the long-run relationship between money demand (M2) and the financial development variables, credits to private sector and currency in circulation. ARDL models (short-run relationship) are estimated for the exchange rate and the inflation rate. The long-run (VECM) analysis confirmed the significance of the financial development variables (CPS and CIC) with a positive sign (see model 4.5), which means that the money demand function is stable in the long run. VECM Granger causality was applied to check causality in the short and long run. The results revealed that bidirectional causality exists between currency in circulation and money demand in both the short and the long run, and a unidirectional causal relationship exists between credits to private sector and money demand in both the short and the long run. 5.2. Recommendations From the foregoing: 1) Government should pay more attention to financial development, i.e.
credits to private sector and currency in circulation, in both the short and the long run, to control money demand, since financial development has a statistically significant impact on money demand in both the short run and the long run. 2) We suggest coordination of both fiscal and monetary policy. *MacKinnon (1996) one-sided p-values. *Indicates lag order selected by the criterion; LR: sequential modified LR test statistic; FPE: final prediction error; AIC: Akaike information criterion; SC: Schwarz information criterion; HQ: Hannan-Quinn information criterion (each test at the 5% level).
Pure Archives - A* Revision Forms of Vectors ● Vectors have both direction and magnitude ● May be written in bold, underlined or with an arrow above them to show their direction ● Magnitude-direction form: ○ Written in brackets ○ (magnitude, direction as angle) ○ Direction angle measured... Functions for Angles Between 0° and 90° Common Values of Sin, Cos and Tan Functions for Angles of Any Size ● Imagine triangle with hypotenuse h ● Opposite side = h sin θ ● Adjacent side = h cos θ The Angles of Trigonometric Functions ● Imagine a triangle inside a... Rational and Irrational ● Some numbers are irrational - cannot be expressed as fractions as decimals go on forever without pattern ● Square root of number that isn’t perfect square is also irrational ● However this is partly rational as you can write it as the square... Graphs ● Factorising quadratic expression gives information about the graph of the function ● X-intercepts found from numbers in each bracket ● Y-intercept is product of numbers in brackets ● If there is a single negative x in the factorised expression, the graph is... Graphs Sketching Polynomials ● Factorise the expression as much as possible to give the x-intercepts ● Set x equal to 0 and find the y-intercept ● Consider what happens as x tends towards ±∞ ● Make sure the graph is the right way up Finding the Equation of a... Reciprocal Graphs Points of Intersection ● Coordinates of POI satisfy the equations of both curves/lines ● Solve equations simultaneously to get coordinates of POI ● Mark on graph when sketching Proportional Relationships ● If y is directly proportional to x then the... Symbols ● < → less than ● > → greater than ● ≤ → less than or equal to ● ≥ → greater than or equal to ● Number line inequalities - filled-in circle means ‘or equal to’ ● Graphical inequalities - solid line means ‘or equal to’ Linear Inequalities ● Involves only... 
Exponential Functions ● In the form y = a^x ● Often used to model growth → exponential growth takes the form y = c·a^(kx) ● Can also have exponential decay (takes the form y = c·a^(-kx)) Logarithms ● Logarithms (logs) are the inverse of exponentials ● log_a(b) = x ⇔ a^x = b ●... Pascal’s Triangle You can expand an expression in the form (a + b)^n using Pascal’s triangle ● Find the nth row of the triangle ● The coefficient of each term is multiplied by the corresponding number on the row of the triangle (ie the second term’s coefficient is... Surds and Indices Quadratic Functions Graphs and Transformations Exponentials and Logarithms Binomial Expansion
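The Pascal's-triangle rule for binomial coefficients can be sketched in a few lines of Python (an illustration, not part of the revision notes): each row is built additively from the row above, and row n gives the coefficients of (a + b)^n.

```python
def pascal_row(n):
    """n-th row of Pascal's triangle (row 0 is [1]), built additively:
    each interior entry is the sum of the two entries above it."""
    row = [1]
    for _ in range(n):
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]
    return row

# Coefficients of (a + b)^4 come straight from row 4:
print(pascal_row(4))  # [1, 4, 6, 4, 1]
```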
Delaunay Triangulation Compute the Delaunay triangulation for a 2-D or 3-D set of points. For 2-D sets, the return value tri is a set of triangles which satisfies the Delaunay circum-circle criterion, i.e., no data point from [x, y] is within the circum-circle of the defining triangle. The set of triangles tri is a matrix of size [n, 3]. Each row defines a triangle and the three columns are the three vertices of the triangle. The value of tri(i,j) is an index into x and y for the location of the j-th vertex of the i-th triangle. For 3-D sets, the return value tetr is a set of tetrahedrons which satisfies the Delaunay circum-circle criterion, i.e., no data point from [x, y, z] is within the circum-circle of the defining tetrahedron. The set of tetrahedrons is a matrix of size [n, 4]. Each row defines a tetrahedron and the four columns are the four vertices of the tetrahedron. The value of tetr(i,j) is an index into x, y, z for the location of the j-th vertex of the i-th tetrahedron. The input x may also be a matrix with two or three columns where the first column contains x-data, the second y-data, and the optional third column contains z-data. An optional final argument, which must be a string or cell array of strings, contains options passed to the underlying qhull command. See the documentation for the Qhull library for details http://www.qhull.org/html/qh-quick.htm#options. The default options are {"Qt", "Qbb", "Qc"}. If Qhull fails for 2-D input the triangulation is attempted again with the options {"Qt", "Qbb", "Qc", "Qz"} which may result in reduced accuracy. If options is not present or [] then the default arguments are used. Otherwise, options replaces the default argument list. To append user options to the defaults it is necessary to repeat the default arguments in options. Use a null string to pass no arguments.
x = rand (1, 10); y = rand (1, 10); tri = delaunay (x, y); triplot (tri, x, y); hold on; plot (x, y, "r*"); axis ([0,1,0,1]); See also: delaunayn, convhull, voronoi, triplot, trimesh, tetramesh, trisurf.
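The circum-circle criterion stated above can be checked directly. The sketch below is a pure numpy illustration of the criterion (not the qhull algorithm Octave calls; the helper names are my own): it tests whether a candidate index matrix, in the same [n, 3] shape as tri, satisfies the Delaunay property for a given point set.

```python
import numpy as np

def circumcircle(p1, p2, p3):
    """Center and radius of the circle through three 2-D points."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, np.linalg.norm(center - np.asarray(p1))

def is_delaunay(tri, pts):
    """Circum-circle criterion: no point lies strictly inside the
    circumcircle of any triangle (tri is an [n, 3] index matrix)."""
    for t in tri:
        c, r = circumcircle(*(pts[i] for i in t))
        for j, p in enumerate(pts):
            if j not in t and np.linalg.norm(p - c) < r - 1e-9:
                return False
    return True

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(is_delaunay([[0, 1, 2], [0, 2, 3]], pts))  # True: a valid triangulation
```

For larger point sets one would of course call a real Delaunay routine (delaunay in Octave, or an equivalent qhull-backed library) rather than this O(n·m) check.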
Surface Integrals of Vector Fields. Flux. II | JustToThePoint I have yet to see any problem, however complicated, which, when you looked at it in the right way, did not become still more complicated, Poul Anderson. A vector field is an assignment of a vector $\vec{F}$ to each point (x, y) in a space, i.e., $\vec{F} = M\vec{i}+N\vec{j}$ where M and N are functions of x and y. A vector field on a plane can be visualized as a collection of arrows, each attached to a point on the plane. These arrows represent vectors with specific magnitudes and directions. Work is defined as the energy transferred when a force acts on an object and displaces it along a path. In the context of vector fields, we calculate the work done by a force field along a curve or trajectory C using a line integral. The work done by a force field $\vec{F}$ along a curve C is: W = $\int_{C} \vec{F}·d\vec{r} = \int_{C} Mdx + Ndy = \int_{C} \vec{F}·\hat{\mathbf{T}}ds$, where $\hat{\mathbf{T}}$ is the unit tangent vector. A vector field is conservative if there exists a scalar function f such that $\vec{F}$ = ∇f (the vector field is its gradient). This scalar function is referred to as the potential function associated with the vector field. Theorem. Fundamental theorem of calculus for line integrals. If $\vec{F}$ is a conservative vector field in a simply connected region of space (i.e., a region with no holes), and if f is a scalar potential function for $\vec{F}$ in that region, then $\int_{C} \vec{F}·d\vec{r} = \int_{C} ∇f·d\vec{r} = f(P_1)-f(P_0)$ where $P_0$ and $P_1$ are the initial and final points of the curve C. Equivalent Properties of Conservative Vector Fields 1. Conservative force. A force $\vec{F}$ is considered conservative if the work done by the force around any closed curve C is zero. Mathematically, this is expressed as $\oint_{C} \vec{F}·d\vec{r} = 0$. 2. Path independence.
A force field is path-independent, meaning the work done by the force in moving an object from one point to another is the same, regardless of the path taken between the two 3. Gradient Field. A vector field $\vec{F}$ is a gradient field if it can be expressed as the gradient of a scalar potential function f. In mathematical terms, this means $\vec{F} = ∇f$, where f is a scalar function and the vector field $\vec{F}$ has components $\vec{F} = ⟨M, N⟩ = ⟨\frac{∂f}{∂x}, \frac{∂f}{∂y}⟩ = ⟨f_x, f_y⟩$. Here, f is the potential function associated with the vector field $\vec{F}$. The lineal integral of $\vec{F}$ along a path C measures the work done by the vector field in moving an object along the path C. If $\vec{F}$ is a gradient field, then: $\int_{C} \vec{F}·d\vec{r} = f(P_1)-f(P_0).$ 4. Exact differential. In the context of differential forms, a differential expression Mdx+Ndy is called an exact differential if there exist a scalar function f(x, y) such that df = $\frac{∂f}{∂x} dx+\frac{∂f}{∂y}dy$ = Mdx + Ndy. This implies that the vector field $\vec{F} = ⟨M, N⟩$ is conservative, and there exists a potential function f such that $\vec{F} = ∇f$. If a differential is exact, then the line integral of $\vec{F}$ over any path C can be evaluated by simply finding the difference in the potential function values at the endpoints of the path. These four properties —conservative force, path independence, gradient field, and exact differential— are different perspectives of the same fundamental concept: the conservativeness of a vector Criterion for a Conservative Vector Field The criterion for checking whether a vector field $\vec{F}$ is conservative can be summarized as follows: If $\frac{∂M}{∂y} = \frac{∂N}{∂x}$, and if the functions M and N have continuous first partial derivatives across the entire domain D, and the domain D is open and simply connected, then $\vec{F}$ is a gradient field, meaning it is conservative. Green’s Theorem. 
If C is a positively oriented (counterclockwise) simple closed curve enclosing a region R, and $\vec{F} = ⟨M(x, y), N(x,y)⟩$ is a vector field that is defined and has continuous partial derivatives on an open region containing R, then Green's Theorem states that: $\oint_C \vec{F} \cdot{} d\vec{r} = \iint_R curl(\vec{F}) dA ↭ \oint_C Mdx + N dy = \iint_R (N_x-M_y)dA = \iint_R (\frac{∂N}{∂x}-\frac{∂M}{∂y})dA$ where • $\oint_C \vec{F} \cdot{} d\vec{r}$ represents the work or line integral of the vector field $\vec{F}$ around the curve C. • $\iint_R (\frac{∂N}{∂x}-\frac{∂M}{∂y})dA$ represents the double integral over the region R, with the integrand being the difference between the partial derivatives of N and M. Green’s Theorem for Flux. If C is a positively oriented (counterclockwise), simple closed curve that encloses a region R and if $\vec{F} = ⟨P, Q⟩$ is a continuously differentiable vector field defined on an open region that contains R, then the flux of $\vec{F}$ across C is equal to the double integral of the divergence of $\vec{F}$ over R. Mathematically, this is expressed as: $\oint_C \vec{F}·\vec{n}d\vec{s} = \int \int_{R} div \vec{F}dA$ where the divergence of $\vec{F}$ is given by $div \vec{F} = P_x + Q_y = \frac{∂P}{∂x}+\frac{∂Q}{∂y}$. Example with a Non-Simply Connected Region Consider a vector field $\vec{F}$ illustrated in Figure A. C′ is an outer boundary, C′′ is an inner boundary enclosing a region where $\vec{F}$ might not be defined at certain points within it, and R is the region between these two boundaries C′ and C′′. We aim to compute the total line integral of the vector field $\vec{F}$ around the composite curve C, which is formed by combining both C′ and C′′. The total line integral around C involves considering both the outer boundary C′ and the inner boundary C′′. Green’s Theorem is a powerful tool that relates the line integral around a simple, closed curve C to a double integral over the region R enclosed by C.
However, in a non-simply connected region, where C is not a single simple curve but consists of multiple curves (like C′ and C′′ ), Green’s Theorem must be applied carefully. Apply Green’s Theorem to the outer and inner boundaries C′ and C′′. To find the total line integral around the composite curve C, we subtract the integral around C′′ because C′′ is taken in the clockwise direction (which is opposite to the counterclockwise direction). $\oint_{C} \vec{F}d\vec{r} = \oint_{C’} \vec{F}d\vec{r}-\oint_{C’’} \vec{F}d\vec{r} = \int \int_R curl \vec{F}dA =$[In our previous case, Counterexample. Consider the vector field $\vec{F} = \frac{-y\vec{i}+x\vec{j}}{x^2+y^2}, curl(\vec{F}) = \frac{∂}{∂x}(\frac{x}{x^2+y^2})-\frac{∂}{∂y}(\frac{-y}{x^2+y^2}) = 0$] 0. Addressing Regions Where $\vec{F}$ is Not Defined In some cases the vector field $\vec{F}$ is not defined at certain points within the region R, such as within the inner boundary C’’. This could occur if there is a singularity or a point where $\vec {F}$ becomes undefined, e.g., the origin. To handle this situation, we modify our approach: Composite curve. We define C as the combination if C’ and C’’. The total line integral around C can then be expressed as the difference between the line integrals around C′ and C′′. Well-defined Regions. Even if $\vec{F}$ is not defined within the hole enclosed by C′′, Green’s Theorem can still be applied to the well-defined region between C′ and C′′, R, Figure B. In other words,we can create a curve that will enclose this region counterclockwise and is well defined, hence the total line integral $\oint_C \vec{F}·\hat{\mathbf{T}}·ds = \oint_{C’}\vec{F}d\vec{r} -\oint_ {C’’}\vec{F}d\vec{r}$[C’’ is clockwise, and the interior blue segments cancel out] = $\int \int_R curl \vec{F}dA$ Definition and Importance of Simply Connected Regions in Green’s Theorem Definition. 
A region R in the plane is simply connected if, for any closed curve C within R, the entire interior of C is also contained within R. In simpler terms, a simply connected region does not have any holes or gaps; it’s a continuous and unbroken area. Imagine drawing any loop or curve within the region R. If you can always shrink that loop to a single point without leaving the region, then R is simply connected. For example, in Figure C, C[1] is simply connected because any closed curve you draw within C[1] can be contracted to a point within the region. On the other hand, C[2] is not simply connected because it contains a hole (indicated by the red curve). Why Simply Connected Regions Matter Understanding whether a region is simply connected is crucial when applying Green’s Theorem. Green’s Theorem is a fundamental result in vector calculus that relates the line integral around a closed curve C to a double integral over the region R enclosed by C. Green’s Theorem states: $\oint_C \vec{F}·\hat{\mathbf{T}}·ds = \int \int_R curl \vec{F}dA$ where • $\vec{F}$ is a vector field defined on R. • $\hat{\mathbf{T}}$ is the unit tangent vector to the curve C. • curl $\vec{F}$ represents the curl of the vector field $\vec{F}$, which measures the tendency of the field to rotate around a point. More importantly, this holds true if the region R is simply connected and the vector field $\vec{F}$ is well-defined and differentiable throughout R. This ensures that the interior of any curve C does not encounter any discontinuities or undefined points in $\vec{F}$. Criteria for Conservative Fields A vector field $\vec{F}$ is called conservative (or a gradient field) if it can be expressed as the gradient of some scalar potential function, f. In other words, $\vec{F} = ∇f$ where ∇f denotes the gradient of f. For a vector field to be conservative, two main conditions must be satisfied: 1. The curl of $\vec{F}$ must be zero everywhere in the region R. Mathematically, $curl \vec{F} = 0$. 
This condition ensures that there is no rotational component to the field. 2. The region where $\vec{F}$ is defined must be simply connected. When these conditions are met, the line integral of $\vec{F}$ around any closed curve C in R is zero: $\vec{F}$ is conservative, $\oint_C \vec{F}·\hat{\mathbf{T}}·ds = \int \int_R curl\vec{F}dA = \ int \int_R 0·dA$ = 0. This result is significant because it tells us that the work done by a conservative force field around any closed loop is zero. In practical terms, if you move a particle in a conservative force field, the energy you spend moving it along a closed path will be recovered when you return to the starting point. Solved mixed exercises • Calculate the double integral $\int \int_R (1-r^2)dA$, where R is the upper half of the unit circle. The unit circle is defined by x^2+y^2 = 1, and “upper half” means that we are considering the part of the circle where y ≥ 0. Set up the integral in polar coordinates. In polar coordinates: x = rcos(θ), y = rsin(θ), and x^2 + y^2 = r^2, the differential area element dA in polar coordinates is rdrdθ. The transformation to polar coordinates simplifies the bounds of integration and the integrand, making the problem more manageable. The region R is the upper half of the unit circle, so θ ranges from 0 to π (since θ = 0 corresponds to the positive x-axis and θ = π corresponds to the negative x-axis, covering the upper semicircle) and r ranges from 0 to 1 (the equation of the unit circle x^2+y^2=1, becomes r^2=1 ⇒ r = ± 1). Thus, the integral in polar coordinates becomes: $\int \int_R (1-r^2)dA = \int_{0}^{π}\int_{0}^{1} (1-r^2)rdrdθ$ =[Simplify the integrand] $\int_{0}^{π}\int_{0}^{1} (r-r^3)drdθ.$ Calculate the inner integral: $\int_{0}^{1} (r-r^3)dr = \frac{r^2}{2}-\frac{r^4}{4}\bigg|_{0}^{1} = (\frac{1}{2}-\frac{1}{4}-(0-0)) = \frac{1}{4}$. 
Calculate the outer integral: $\int_{0}^{π} \frac{1}{4}dθ = \frac{1}{4}θ\bigg|_{0}^{π} = \frac{π}{4}-0 = \frac{π}{4}$ • Find the volume of the solid that lies between the paraboloid z = 4 - x^2 - y^2 and the xy-plane (where z = 0) (Figure A). Visualizing the Problem. The paraboloid z = 4 - x^2 - y^2 opens downward, and its vertex is at the point (0, 0, 4) on the z-axis. The xy-plane is where z = 0, which forms the base of the solid. We are interested in the region above this plane and below the surface of the paraboloid. Set up the integral in polar coordinates. The equation of the paraboloid z = 4 - x^2 - y^2 is given in Cartesian coordinates. To simplify the integration, we convert this equation into polar coordinates. In polar coordinates: x = rcos(θ), y = rsin(θ), the paraboloid equation becomes z = 4 - (x^2 + y^2) = 4 - r^2, and the differential area element dA in polar coordinates is rdrdθ. Determine the region of integration: The paraboloid intersects the xy-plane when z = 0, so $4 - r^2 = 0 ⇒ r^2 = 4 ⇒ r = 2$ (taking the non-negative root, since r ≥ 0). So r ranges from 0 to 2 (another way of thinking about it: the region of integration R is the projection of the paraboloid onto the xy-plane, a circle found by setting z = 0 in the equation of the paraboloid) and θ ranges from 0 to 2π. Set Up the Volume Integral. The volume V under the paraboloid and above the xy-plane can be calculated using a double integral over the region R: $V = \int \int_R zdA = \int \int_R (4-r^2)dA = \int_{0}^{2π}\int_{0}^{2} (4-r^2)rdrdθ$ =[Simplify the integrand] $\int_{0}^{2π}\int_{0}^{2} (4r-r^3)drdθ.$ Evaluate the inner integral with respect to r: $\int_{0}^{2} (4r-r^3)dr = 2r^2-\frac{r^4}{4}\bigg|_{0}^{2} = (8-4)-(0-0)=4.$ Evaluate the outer integral with respect to θ: $\int_{0}^{2π} 4dθ = 4θ\bigg|_{0}^{2π} = 8π.$ • Calculate the circulation of the vector field $\vec{F} = ⟨-y, x⟩$ along a circle of radius 2 centered at the origin and oriented counterclockwise.
The circulation of a vector field around a closed curve C is given by the line integral: $\oint_C \vec{F}·\hat{\mathbf{T}}ds = \oint_C \vec{F}·d\vec{r} = \oint_C Mdx + Ndy.$ Here, M = -y and N = x.

Parameterize the Circle. The circle of radius 2 centered at the origin can be parameterized using a vector function: $\vec{r}(t) = ⟨2cos(t), 2sin(t)⟩, 0 ≤ t ≤ 2π.$ This parameterization describes a point $\vec{r}(t)$ on the circle as t varies from 0 to 2π, tracing out the entire circle in the counterclockwise direction.

Compute the Components of $\vec{F}$ and the differentials dx and dy. Given $\vec{F} = ⟨-y, x⟩, M = -y =[y=2sin(t)] -2sin(t), N = x = 2cos(t) ⇒ dx = -2sin(t)dt, dy = 2cos(t)dt.$

Set Up the Line Integral. The line integral for circulation around the curve C is given by: $\oint_C \vec{F}·\hat{\mathbf{T}}ds = \oint_C \vec{F}·d\vec{r} = \oint_C Mdx + Ndy =[\text{Substituting the values of M, N, dx, and dy into this expression}] \int_{0}^{2π} (-2sin(t))(-2sin(t)dt)+(2cos(t))(2cos(t))dt = \int_{0}^{2π} (4sin^2(t)+4cos^2(t))dt =[\text{Notice that } sin^2(t)+cos^2(t) = 1] \int_{0}^{2π} 4dt = 4t\bigg|_{0}^{2π} = 8π - 0 = 8π$

• How can we swap or exchange the order of integration in $\int_{0}^{4}\int_{\sqrt{y}}^{2} \sqrt{x^3+1}dxdy$?

Determine the region of integration. Before we can reverse the order of integration, we need to understand the region over which we’re integrating. The given limits for y are from 0 to 4. For a fixed value of y, x ranges from $\sqrt{y}$ to 2. These limits describe a region in the xy-plane.

Sketch the region (Figure B). We get the region bounded by the curve y = x^2 from x = 0 to x = 2 and the line x = 2. Alternatively, the region is bounded by 0 ≤ x ≤ 2 and, for a fixed x, 0 ≤ y ≤ x^2 (y varies from 0 to x^2).
Set up the integral with reversed limits: $\int_{0}^{4}\int_{\sqrt{y}}^{2} \sqrt{x^3+1}dxdy = \int_{0}^{2}\int_{0}^{x^2} \sqrt{x^3+1}dydx$

Evaluate the inner integral: $\int_{0}^{x^2} \sqrt{x^3+1}dy = y\sqrt{x^3+1}\bigg|_{0}^{x^2} = x^2\sqrt{x^3+1}$

Evaluate the outer integral: $\int_{0}^{2} x^2\sqrt{x^3+1}dx =$[To evaluate this integral, we use a substitution. Let: u = x^3+1, du = 3x^2dx ⇒ x^2dx = \frac{1}{3}du] $\frac{1}{3}\int \sqrt{u}du = \frac{1}{3}·\frac{2}{3}u^{\frac{3}{2}} = \frac{2}{9}(x^3+1)^{\frac{3}{2}}\bigg|_{0}^{2} = \frac{2}{9}(9^{\frac{3}{2}}-1^{\frac{3}{2}}) = \frac{2}{9}(27-1) = \frac{52}{9}$

• How can we swap or exchange the order of integration for the double integral $\int_{0}^{1}\int_{x}^{2x} f(x,y)dydx$ (see Figure D)?

1. Understand the region of integration. Before we can reverse the order of integration, we need to understand the region over which we’re integrating. The inner integral $\int_{x}^{2x} f(x,y)dy$ indicates that for a fixed x, y ranges from x to 2x. The outer integral indicates that x ranges from 0 to 1.

2. Sketch the region: The lower bound is y = x, a line with slope 1 passing through the origin. The upper bound is y = 2x, a line with slope 2 passing through the origin. x ranges from 0 to 1.

3. Set up the new integral with reversed limits. In the new region of integration, y ranges from 0 to 2; for y in [0, 1], x ranges from y/2 to y, and for y in [1, 2], x ranges from y/2 to 1. $\int_{0}^{1}\int_{x}^{2x} f(x,y)dydx = \int_{0}^{1}\int_{\frac{y}{2}}^{y} f(x,y)dxdy + \int_{1}^{2}\int_{\frac{y}{2}}^{1} f(x,y)dxdy$

• Find the volume of the tetrahedron in the first octant bounded by 4x +2y + z = 8 (Figure i). The problem asks us to find the volume of a tetrahedron bounded by the plane 4x +2y +z = 8 and the coordinate planes (the xy-, xz-, and yz-planes). This tetrahedron is in the first octant, meaning all coordinates (x, y, z) are non-negative.

Understanding the Geometry.
To understand the shape of the tetrahedron, let’s determine where the plane intersects the coordinate axes:
• x-intercept: Set y = 0 and z = 0 in the plane equation 4x +2y +z = 8. This gives 4x = 8, so x = 2.
• y-intercept: Set x = 0 and z = 0 in the plane equation 4x +2y +z = 8. This gives 2y = 8, so y = 4.
• z-intercept: Set x = 0 and y = 0 in the plane equation 4x +2y +z = 8. This gives z = 8.
Thus, the tetrahedron has vertices (2, 0, 0), (0, 4, 0), (0, 0, 8), and the origin (0, 0, 0).

Setting Up the Triple Integral in Cartesian Coordinates. The volume V can be calculated using a triple integral over the region defined by the tetrahedron. We integrate the function f(x, y, z) = 1 over this region because the volume is simply the integral of 1 over the region: V = $\int_{0}^{2}\int_{0}^{-2x+4} \int_{0}^{8-4x-2y} 1·dzdydx$, which reduces to a double integral by integrating over the projection of the tetrahedron onto the xy-plane.
1. z ranges from 0 to 8−4x−2y since the plane equation can be solved for z as z = 8 -4x -2y.
2. y ranges from 0 to 4 −2x. When z = 0, solving 4x +2y = 8 gives y = 4 −2x.
3. x ranges from 0 to 2 since when y = 0 and z = 0, solving 4x = 8 gives x = 2.

Simplifying the Integral to a Double Integral. The innermost integral with respect to z is straightforward since we are integrating a constant 1: $\int_{0}^{8-4x-2y} 1·dz = (8-4x-2y) - 0 = 8-4x-2y$

Set up the integral in Cartesian coordinates: $V = \int_{0}^{2} \int_{0}^{-2x +4} (8 -4x-2y)dydx$

Compute the inner integral with respect to y: $\int_{0}^{-2x +4} (8 -4x-2y)dy = 8y -4xy -y^2\bigg|_{0}^{-2x +4} = 8(-2x+4)-4x(-2x+4)-(-2x+4)^2 = -16x +32 +8x^2 -16x -(4x^2-16x+16) = 4x^2-16x+16$

Compute the outer integral with respect to x: $\int_{0}^{2} (4x^2-16x+16)dx = \frac{4}{3}x^3-8x^2+16x\bigg|_{0}^{2} = \frac{4}{3}·8-8·4+16·2 = \frac{32}{3}-32+32 = \frac{32}{3}$.

Conclusion: The volume of the tetrahedron in the first octant bounded by the plane 4x +2y +z = 8 is $\frac{32}{3}$ cubic units.
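As a sanity check on the worked answers above (π/4, 8π for the paraboloid volume, 52/9 for the reversed-order integral, and 32/3 for the tetrahedron), the integrals can be approximated numerically. The sketch below uses a plain midpoint rule in Python; it is an independent verification of the arithmetic, not part of the original exercises.

```python
import math

def midpoint(f, a, b, n=500):
    """Composite midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Upper half of the unit disk, polar form: integral of (1 - r^2) r dr dθ = π/4.
ex1 = midpoint(lambda th: midpoint(lambda r: (1 - r * r) * r, 0, 1), 0, math.pi)
assert abs(ex1 - math.pi / 4) < 1e-3

# Volume under the paraboloid: integral of (4 - r^2) r dr dθ over the full disk = 8π.
ex2 = midpoint(lambda th: midpoint(lambda r: (4 - r * r) * r, 0, 2), 0, 2 * math.pi)
assert abs(ex2 - 8 * math.pi) < 1e-3

# Reversed-order integral: integral of x^2 sqrt(x^3 + 1) from 0 to 2 = 52/9.
ex3 = midpoint(lambda x: x * x * math.sqrt(x ** 3 + 1), 0, 2)
assert abs(ex3 - 52 / 9) < 1e-3

# Tetrahedron volume: integral of (8 - 4x - 2y) over the triangular base = 32/3.
ex4 = midpoint(lambda x: midpoint(lambda y: 8 - 4 * x - 2 * y, 0, 4 - 2 * x), 0, 2)
assert abs(ex4 - 32 / 3) < 1e-3

print("all four results confirmed")
```

The midpoint rule is accurate enough here because every integrand is smooth; a finer grid (larger n) would tighten the tolerances further.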
This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License and is based on MIT OpenCourseWare [18.01 Single Variable Calculus, Fall 2007].
1. NPTEL-NOC IITM, Introduction to Galois Theory.
2. Algebra, Second Edition, by Michael Artin.
3. LibreTexts, Calculus and Calculus 3e (Apex); Abstract and Geometric Algebra; Abstract Algebra: Theory and Applications (Judson).
4. Field and Galois Theory, by Patrick Morandi. Springer.
5. Michael Penn, and MathMajor.
6. Contemporary Abstract Algebra, by Joseph A. Gallian.
7. YouTube’s Andrew Misseldine: Calculus, College Algebra, and Abstract Algebra.
8. MIT OpenCourseWare, 18.01 Single Variable Calculus, Fall 2007, and 18.02 Multivariable Calculus, Fall 2007.
9. Calculus Early Transcendentals: Differential & Multi-Variable Calculus for Social Sciences.
Conversion Table

Decimal to Hex Table

Our decimal to hex table will help you convert basic decimal numbers to hexadecimal numbers.

Decimal to Hexadecimal Conversion Table

Decimal  Hex
10       a
11       b
12       c
13       d
14       e
15       f

How to use the Decimal to Hexadecimal Table?

Our free decimal-to-hex table makes it easy to convert decimal numbers to hexadecimal. Simply find your decimal number and the table will reveal the corresponding hexadecimal number. The table will highlight as you hover over each row to make sure you get your decimal to hexadecimal conversion correct.

Why does the Decimal to Hexadecimal Table only go up to 15?

This simple decimal to hexadecimal table only reveals the first 15 numbers of the hexadecimal number system. This is a super useful tool for learning basic hex. In the hex number system, there are 16 unique characters: the digits 0 through 9 and the letters A through F. To represent the number 16 in hexadecimal you write it as 10. If you want to convert large decimal numbers to hexadecimal, try our extended decimal to hex table here.

Can Hexadecimal Represent Irrational Numbers?

Yes. As well as representing rational numbers, hexadecimal can be used to represent irrational numbers. Some common irrational numbers in hexadecimal include:
• Pi or π (3.141592…) in hexadecimal is 3.243F6A8…
• Phi or φ (1.618033988…) in hexadecimal is 1.9E3779B9…
• The base of the natural logarithm, e (2.7182818…), in hexadecimal is 2.B7E151628…
• The square root of 2 or √2 (1.41421…) in hexadecimal is 1.6A09E66…

Where does the Word Hexadecimal Come From?

The earliest reference to the current form of the hexadecimal system and the word hexadecimal dates back to the 1960s, replacing earlier proposals to call the system “sexadecimal”. The name hexadecimal mixes the Greek “hex” (six) with the Latin-derived “decimal”. This adoption came with many computing advances of the 1960s, including IBM’s System/360, whose documentation used hexadecimal notation.
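The conversions in the table above can be reproduced in a few lines of code. The sketch below (Python, purely as an illustration of the repeated-division method) builds the hex string by hand and checks it against the language's built-in conversion:

```python
def to_hex(n):
    """Convert a non-negative integer to a hexadecimal string by repeatedly
    dividing by 16 and reading off the remainders."""
    digits = "0123456789abcdef"
    if n == 0:
        return "0"
    out = ""
    while n > 0:
        out = digits[n % 16] + out
        n //= 16
    return out

assert to_hex(10) == "a" and to_hex(15) == "f"  # the rows of the table above
assert to_hex(16) == "10"                       # 16 is written "10" in hex, as noted
assert to_hex(255) == format(255, "x")          # agrees with Python's built-in conversion
```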
Can Addition, Subtraction, Multiplication and Division be done with Hexadecimal?

Yes, all of the above mathematical operations can be performed in hexadecimal (or any other) number system. We provide some free useful calculators to help you perform and check these operations, which you can find here:
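As a quick illustration of hexadecimal arithmetic (a generic sketch, not one of the site's calculators): parse the hex strings into integers, do the arithmetic, and format the result back into hex.

```python
# Hexadecimal arithmetic by converting to integers and back.
a = int("1f", 16)  # 31 in decimal
b = int("0d", 16)  # 13 in decimal

assert format(a + b, "x") == "2c"   # 31 + 13 = 44  = 0x2c
assert format(a - b, "x") == "12"   # 31 - 13 = 18  = 0x12
assert format(a * b, "x") == "193"  # 31 * 13 = 403 = 0x193
assert a // b == 2                  # integer division works the same in any base
```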
How the VA Does Math

VA Math is a strange thing. We have so many veterans who simply try to add up 50% + 30% + 20% and think it should be rated at 100%. That seems logical, but that’s not how VA math works. VA Math is the math used to combine the Military Disability Ratings of multiple conditions to give a veteran a single overall, or “combined”, rating. In other words, if a person has more than one condition that is rated for VA Disability, then each of the ratings are combined (note that the key word here is “combined”, not “added”) together using VA Math to give one overall rating. This single rating is then used to determine the exact type and amount of VA Disability Benefits the veteran receives.

So here’s how VA Math works. Each condition is a percentage of the disability of the service member. When combined together, however, each percentage is not a percentage of the entire service member but a percentage of what is left after other percentages have been subtracted. Got that? No? Well, here’s an example.

Let’s say that an entire body is equal to 100% and that a veteran has three rated conditions. The first is a knee injury that is rated 30%. The second is a shoulder injury rated 20%. The last is a back injury rated 10%. Instinct would assume that the combined rating would be 60% (30 + 20 + 10 = 60). The VA starts with the largest rating, 30%. This rating is then subtracted from the total body rating of 100%. Of the total body, now only 70% remains. So instead of simply subtracting 20 for the shoulder’s 20%, you can only subtract 20% of the 70 that is left, which is 14 (0.2 x 70 = 14). 70 minus 14 is 56. Now, since only 14 was subtracted from the total body, only 14 is added to the total combined rating. Now for the last 10%. Again we can only subtract 10% of what is left of the total body. Thus, 10% of 56 is 5.6 (0.1 x 56 = 5.6). 56 minus 5.6 is 50.4.
And again, since only 5.6 was subtracted from the total body, only 5.6 is added to the combined rating. So far, the veteran’s rating is 30% + 14% + 5.6% = 49.6%. Once all the conditions are counted, the total combined rating is rounded to the nearest 10. 49.6%, therefore, equals 50% total disability.

Here’s a close rule of thumb. This is not an accurate way, but it usually comes close. Take the highest rating (30% in this example), then add ½ of the total of the other disabilities (20% + 10% = 30%, and 30% x ½ = 15%). Then add that to the original 30% and round to the nearest 10. Voila! Clear as mud! If you need more help, contact the Vet Defender. You can set your own appointment by clicking on this link: Schedule an Appointment.
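The combining procedure described above is easy to mechanize. The sketch below is an illustration of the arithmetic in this article, not an official VA calculator: it starts from 100% body efficiency, applies each rating (largest first) to what remains, and rounds the total to the nearest 10.

```python
def combined_rating(ratings):
    """Combine VA disability ratings: each rating is taken as a percentage of
    the body efficiency remaining after the larger ratings have been applied,
    then the total is rounded to the nearest 10."""
    remaining = 100.0
    for r in sorted(ratings, reverse=True):  # the VA starts with the largest rating
        remaining -= remaining * r / 100.0
    combined = 100.0 - remaining
    return int(combined / 10.0 + 0.5) * 10   # round to the nearest 10, halves up

# The article's example: 30% + 20% + 10% combines to 49.6%, which rounds to 50%.
assert combined_rating([30, 20, 10]) == 50
# And 50% + 30% + 20% does not add to 100%: it combines to 72%, which rounds to 70%.
assert combined_rating([50, 30, 20]) == 70
```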
Forecasting the COVID-19 Epidemic for the U.S.

Special Report, May 8, 2020

Updated Estimates and Forecasts (June 10, 2020): We have updated estimates and forecasts of infection and mortality rates in the United States and in the constituent states of the Fifth District. The current update is based on data up to June 6 and now also includes mortality projections for the latter. We reestimate the model with the recent data so that the projections reflect both the influence of revised model estimates and more data. A full set of graphs for U.S. and Fifth District forecasts is available online.

For the entire United States, we now project median fatalities of 184,000 with a 95 percent range of 175,000 to 193,000 by early September. The corresponding number of total infections is 3.1 million. Even three months out, we expect the United States to have 10,000 new cases and almost 600 deaths daily. Our forecast intervals have shifted slightly upwards, possibly on account of relaxed social distancing measures in many parts of the United States since early May. Otherwise, the model performs exceedingly well in that projections for daily infections and fatalities become more precise and data realizations continue to lie within error bands.

Over the same horizon, we project 3,400 cumulative fatalities in Virginia, 2,800 in North Carolina, 5,500 in Maryland, and 1,200 in South Carolina, with the District of Columbia and West Virginia below the 1,000 mark, although the per capita rates in the former are exceedingly high. Although forecast intervals have tightened, the data flow for North and South Carolina has started to fall outside of the error bands from the previous forecast, which led to a considerable revision in the estimates. The pattern of new infections and deaths for these two states suggests the effect of early relaxation of lockdown measures, although the per capita rates are still comparatively low.
We are developing a richer model for all states that allows forecasts to adjust to variables including the amount of social distancing. Preliminary results and further discussion are here.

A key weapon in fighting the COVID-19 epidemic is understanding how the contagion has spread through the U.S. population and how its spread is likely to evolve in the future. Based on such knowledge, public health measures can be devised, whether they are social distancing recommendations or more stringent lockdown procedures. Understanding of the disease's path can be gained using theoretical or statistical modelling techniques that allow researchers to forecast its future course, which can then be used as a basis for decisions about further public health measures.

The coronavirus behind the COVID-19 pandemic is a novel contagion that is highly infectious, has a long incubation period, and can transmit asymptomatically, that is, without an infected person showing any signs of infection or disease. At the same time, this also means that data on infections and even deaths caused by the disease are difficult to collect, resulting in time lags between infections, possible fatalities, and data availability. In addition, the coronavirus is novel enough that previous experiences, such as the SARS pandemic of 2003, may not be immediately applicable. A particularly vexing feature of many attempts to project the course of the pandemic in the U.S. and across the world is that projections have changed frequently, often in significant ways. This is true of forecasting models that rely on strong theoretical relationships, such as the Imperial College model that informed the U.K. government's early response to the crisis, but also of the statistical model developed by the Institute for Health Metrics and Evaluation at the University of Washington that was referenced in the U.S. government's response.
This aspect of forecasting the course of the pandemic is problematic insofar as frequent revisions may cast doubt on the validity of the model. Macroeconomic forecasters are familiar with this challenge since the economy is buffeted by shocks, the data are subject to measurement errors, and the underlying behavior of the variables may change over the forecast horizon because of policy interventions. All of these aspects are present in the current situation when attempting to forecast the path of the pandemic. However, there is the danger that policymakers and the public lose trust in the researchers' and forecasters' ability to capture and describe the disease. In such a forecasting environment, the source of uncertainty needs to be carefully communicated and taken into account during the decision-making process. Moreover, forecasters should adapt to the changing nature of the data and learn from where forecasts went wrong.

In this article, we describe a statistical model that we use to estimate and forecast the path of infections and deaths caused by COVID-19 in the U.S. We focus on documenting the uncertainty surrounding the estimates and projections, as our approach is not immune to the issues raised above. However, we argue that understanding the source of uncertainty is an important step in making public health decisions.

The Epidemic Forecasting Model and Data

We have developed a statistical model for estimating and forecasting the number of infections and deaths over the course of the pandemic. (Documentation of the model and the sources can be found online.) Our model is almost entirely data-driven, in that it tries to match the underlying time series properties of the data at hand in a flexible manner while at the same time relying on guidance from epidemiological insights about how an epidemic runs its course. The time path of the number of infections during an epidemic follows a typical pattern.
When a pathogen enters a population that is susceptible to infection, the number of infected cases is initially low. However, the growth rate of new infections is high and tends to rise sharply at an exponential rate because each infected person creates a chain of new infections. At some point, however, the pathogen runs out of susceptible hosts, either because they are already infected, are immune, or they are simply not physically present due to health policies such as social distancing. At this inflection point, the growth rate of infections falls until it eventually declines to zero. In our empirical model, we attempt to replicate these broad patterns of an epidemic. We do so by specifying a flexible functional form that describes the path of infections over time as depending on the current and lagged levels of the number of infections. The model is loosely parameterized, whereby the parameters are estimated to provide best fit of the model specification to the available data. In contrast to theoretical epidemiological models, our specification has more leeway to go where the data tell it to and is not constrained by precise theoretical relationships that may be specified incorrectly. Identification of the model parameters is based on the growth rate and changes in the growth rate of infections. Early in an epidemic, the data typically show exponential growth, rapid and increasing, whereas after some time, as the stock of susceptible hosts starts getting smaller, the rise in the growth rate decelerates until it reaches a peak. Afterward, the growth rate of new infections declines. These three distinct phases of an epidemic can be associated with distinct parameters in our model, which are thus identified from the data flow. This is also where a problematic aspect of any epidemiological model lies. At first, data are sparse, but the underlying course of the infection is such that it should be easy to forecast. 
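The typical S-shaped pattern just described (accelerating, near-exponential growth, an inflection point, then decay toward zero) is captured by, for example, a logistic curve. The snippet below is a generic illustration of that shape, not the authors' actual model specification, and the parameter values are invented:

```python
import math

def cumulative_cases(t, K=1.0, r=0.2, t0=50):
    """Logistic cumulative-infection curve with capacity K, growth rate r,
    and inflection point at time t0."""
    return K / (1 + math.exp(-r * (t - t0)))

# Daily new cases are the day-over-day change in cumulative cases.
new_cases = [cumulative_cases(t + 1) - cumulative_cases(t) for t in range(100)]
peak = max(range(100), key=lambda t: new_cases[t])

assert abs(peak - 50) <= 1            # new infections peak at the inflection point
assert new_cases[10] < new_cases[30]  # growth accelerates early on...
assert new_cases[70] > new_cases[90]  # ...and decays after the peak
```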
Put differently, the epidemic develops a very strong trend with exponential growth. Simply extrapolating from this growth trend would produce good forecasts for a while — until the spread starts slowing down and gravitates toward an inflection point. While epidemiological models based on the course of previous epidemics confirm that there will be an inflection point, estimates from the sparse initial data are highly uncertain. Moreover, theoretical and statistical epidemiological models are sensitive to small variations in parameters. It is in this sense that model estimates and forecasts should be interpreted with much caution at the beginning of the pandemic, and uncertainty at this stage should explicitly be taken into account when making public health policy decisions. In addition to modeling infections, we also consider the mortality rate. Fundamentally, the number of deaths is a function of the number of infections. Not all infections are fatal, and an observed death is the outcome of a process that can vary over time. We thus assume that the number of deaths on any given day is proportional to the average number of observed infections over a time period. This captures the idea that there is a minimum number of days that pass after an initial infection can result in a fatality. A key aspect of our modeling approach is that we explicitly capture the uncertainty of the model estimates and, perhaps more importantly, the uncertainty inherent in the forecast. The precision of a forecast, or how tightly possible alternative forecast paths are concentrated around the most plausible path, is generally affected by two factors: first, the uncertainty of the model estimates in terms of overall fit and parameter estimates since no statistical model fits precisely; and second, by the extent to which the model may be subject to further disturbances or imprecision in data collection in the future. 
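The deaths specification mentioned above, in which daily deaths are proportional to the average of observed infections over an earlier window, can be sketched as follows. The lag, window length, and fatality fraction here are invented for the illustration and are not the paper's estimates:

```python
def daily_deaths(new_cases, lag=7, window=14, fatality=0.05):
    """Illustrative deaths series: deaths on day t are a fixed fraction of the
    average new cases over a window ending `lag` days earlier."""
    deaths = []
    for t in range(len(new_cases)):
        start, end = t - lag - window + 1, t - lag + 1
        past = new_cases[max(start, 0):max(end, 0)]
        deaths.append(fatality * sum(past) / len(past) if past else 0.0)
    return deaths

cases = [0] * 10 + [100] * 30 + [0] * 30
d = daily_deaths(cases)

assert d[5] == 0.0                                 # no deaths before any infections
assert abs(max(d) - 0.05 * 100) < 1e-9             # bounded by the fatality fraction
assert d.index(max(d)) > cases.index(max(cases))   # deaths peak with a delay after cases
```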
We take both aspects into account to give a sense of how uncertain forecasts in a pandemic truly are, especially when the data flow is sparse at the beginning. We fit our models to observed data on daily new cases of infections and deaths. The estimated models are then used to forecast the future paths of the respective variables, whereby we take into account all potential sources of uncertainty. We collect data from a variety of publicly available sources. The estimates are performed on these data up to and including May 3, 2020.

Estimates and Forecasts of the Number of Infections

Figure 1 shows the cumulative number of cases, i.e., infections, in the U.S. and the daily count of new cases as a percentage of the population. The grey line in the graph represents the actual number of measured new infections, while the orange lines are drawn from the estimated model. We show the best-fitting line and a 95 percent confidence region around these estimates. In other words, the estimates represent our assessment of the trend in number of infections as seen through the lens of the empirical model. They differ from the actual numbers because the latter are subject to various errors, such as simple data entry mistakes, different reporting guidelines and dates across the 50 states, and other idiosyncratic variations in how the disease progresses.

We estimate that the peak in the number of new infections was reached by mid-April, around April 12. After this date, the number of new infections has been falling slowly but steadily. In terms of the cumulative case numbers, this suggests that the U.S. is already past the inflection point and that measures to suppress the spread of the pandemic have been working to some degree. However, since mid-April the incoming data on new cases have become increasingly volatile.
This appears largely driven by the fact that infections have spread beyond a few clusters with very high case numbers, specifically New York City, to a wider swath of states. At the same time, the volatility does not seem to affect the median estimated path as it shows a general downward trend from the estimated peak.

Given our last data point on May 3, we project the time path of new infections and cumulative cases forward until the start of August. We show the median forecast in Figure 1. The uncertainty region prior to May 3 captures the estimation uncertainty of the fitted model, while the uncertainty region after May 3 includes uncertainty from disturbances in the data. We note that uncertainty about new case numbers widens immediately, which reflects both the uncertainty about the dynamics of the pandemic and the uncertainty inherent in the data process. More specifically, wide uncertainty bands and volatile data suggest that one should consider the broader trend rather than extrapolate too much from a few recent data points. Our forecasted range of new infections includes the estimated peak, which indicates that the U.S. is not out of the woods yet and that it may, in fact, have reached a plateau. As the pandemic runs its course, the degree of uncertainty declines, however, and the incidence of new cases becomes more precisely estimated as the infection rate moves toward zero. The cumulative case numbers in Figure 1 are projected to grow over the next several months, albeit at a declining rate. By the start of August, we project 0.71 percent of the U.S. population will be infected, with a range of 0.66 percent to 0.78 percent.

In Figure 2, we take a closer look at how the passage of time and the availability of more data have affected our projections. We estimate our model for data that were available, respectively, 14 and 28 days ago, before the current estimation date of May 3.
The projections as of April 5 are shown in green, those as of April 19 in red, and the current estimate is in blue. We only show the respective 95 percent confidence regions. Overall, the estimates for the last two samples are contained in the uncertainty region of the April 5 sample. As more information became available, estimates of the underlying pattern in the infection data became more precise and the model developed a better sense of where the peak of new infections, thus the inflection point of the pandemic, was. Consequently, the projections became more precise. The same pattern can be seen for the April 19 and the May 3 sample. The latter is somewhat smaller, but it is also shifted upward for both cumulative and new cases. That is, the data flow over these 14 days led to improved precision in the forecast, but also to a revision of the projected path of the epidemic. We can tie this pattern to the fact that observed new infections appear to have plateaued over the last few days.

Mortality Forecasts for the U.S.

Figure 3 shows our estimates of the mortality model described above and our projections for cumulative deaths through the end of July. These projections depend on our models for both the number of cases and the mortality rate, allowing for estimation uncertainty and disturbances in both models. Our median projection of total fatalities by the start of August is 159,000, with a range of 140,000 to 181,000. We also estimate that the number of daily deaths peaked around April 20 at 2,300, but there is considerably more uncertainty when compared with the infection model. The peak of the mortality data comes with a delay of about one week after new infections have peaked. Given what we know so far about the course of COVID-19, this lag appears short since the time from infection to death appears to be about four to five weeks.
However, we are measuring as new cases those who have been tested, and this group is dominated by those who have already developed more serious complications. Figure 3 also shows the increased volatility of the recent mortality data, which affects the precision of the projections. Specifically, we cannot rule out that the peak of daily deaths has been reached since the uncertainty region for several days out includes values that are considerably higher.

In Figure 4, we perform the same exercise as before, estimating the mortality model for samples up to 14 and 28 days ago. Forecast uncertainty based on the April 5 data is very wide. The forecast left open the possibility that cumulative deaths would reach fewer than 50,000 by the start of August. At the time of the estimates, the sample was simply too short to result in tight inference. Moving the sample ahead to include data up to April 19 changes the outlook notably. In terms of cumulative deaths, the error bands are now contained within the April 5 region, while moving to the current sample tightens uncertainty further. The graph with the uncertainty region for new deaths suggests, however, that the reduction in uncertainty is coming from bounding the forecast distribution from below. That is, the model now puts more weight on a higher number of fatalities than could have been expected on April 5.

Using a statistical model of the COVID-19 pandemic that attempts to capture the underlying patterns and evolution of infections and deaths, we project that by the start of August there will be 2.3 million observed cases of COVID-19 infections, which translates to 0.71 percent of the U.S. population. At the same time, we forecast 159,000 fatalities for the same time period. Neither new infections nor daily deaths are likely to have returned to zero by then. The uncertainty surrounding these estimates is still considerable, with deaths ranging between 140,000 and 181,000.
As more data become available, the estimates of the underlying pattern of the epidemic will become more precise and the uncertainty surrounding these forecasts will decline. Our forecasts are implicitly predicated on the assumption that the public health policies that have been put in place will not change over the course of the forecast horizon. In that sense, our forecasts provide an assessment of whether and to what extent these policies are successful. However, it is unlikely that they will continue, which will then affect the time path of the pandemic. The value of these forecasts thereby lies in highlighting the range of possible outcomes in a no-change scenario, which can serve as a benchmark to evaluate alternative public health measures against.

Paul Ho is an economist and Thomas Lubik is a senior advisor in the Research Department of the Federal Reserve Bank of Richmond. Christian Matthes is an associate professor in the Department of Economics at Indiana University.

We can contrast this estimate with the one reported in our Regional Matters post "Forecasting the COVID-19 Pandemic in the Fifth District" based on data up to April 20. We estimated the peak to be several days earlier and the decline in new infections much steeper. Since then, the new data seemed to cluster around a plateau that by itself would have pushed out the peak estimate further. However, our initial model specification was not well-suited to handle a data pattern that included such plateauing. We therefore modified the model slightly by including an additional parameter designed to capture this pattern, which improved fit.

This article may be photocopied or reprinted in its entirety. Please credit the authors, source, and the Federal Reserve Bank of Richmond and include the italicized statement below. Views expressed in this article are those of the authors and not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System.
Stickman Logic

1. A program for the beautification of logic

It has been argued that a major obstacle to the development of logical rigour in a formal mode is the lifelessness of an argument expressed as a mere handful of squiggles and chicken feet (cf. Searle [1980]). To illustrate this alleged lifelessness, we note that the following expresses a logical truth but does so in a drab, Spartan way. What are P's and Q's, besides things that students are admonished to mind for reasons they know not what? Indeed, the main division between the so-called Continental and Analytic traditions has been disputes over whether the task of being unclear should be carried out in natural language or in a formal system. Below we develop a formal system equivalent in power to the usual sentential logic, but deployed in a humane and aesthetic framework.

2. Formalism with a human face

We define the following symbols as the five logical constants negation, conjunction, disjunction, conditional, and biconditional:

We may define a well-formed formula (wff) of Stickman Logic in the usual way.

1. Any atomic proposition is a wff.
2. If
3. If
4. If
5. All and only wff's can be generated from the above rules.

Just as we may sometimes omit parentheses from the usual propositional calculus, we will omit

There remain several difficulties, which we address briefly in the remaining sections.

3. Begriffuntvolkschrift

It is a serious shortcoming of the system that it only treats the propositional calculus. We acknowledge that a predicate generalization of Stickman Logic-- a logic not of mere relations, but of human relations-- will necessarily be a goal of further work in the field.

4. Modern logic meets modern art

The limits of typography and notation may be seen as barriers to the new logic. This objection takes two forms. First, setting type with Stickman Logic may increase the costs of publication.
Logicians already working within a narrow margin of profit may thus reject its material implications. With the increasing use of digital technology, however, this argument seems specious. Second, the symbols of the new logic require more pen strokes than the old symbols. Writing by hand, in this sense already digital, must be done with an eye to economy. Both these limitations may be overcome by, where necessary, writing with the old symbols but understanding them as abbreviations for the symbols of Stickman Logic. We may write P v or again , but we will be mindful that these are mere shorthand for and . We thus reach a point at which the same symbols that codified dehumanization, now understood in a different way, embody the humanistic insights of Stickman Logic.

5. From modern logic to postmodern logic

The teaching of science and mathematics must be purged of its authoritarian and elitist characteristics, and the content of these subjects enriched by incorporating the insights of the feminist, queer, multiculturalist, and ecological critiques. (Sokal [1996])

We recognize that Stickman Logic qua Stickman embodies certain patriarchal assumptions about the nature of right thinking. This critique is easily addressed, but hides a deeper critique inside itself. Although Stickman Logic may be generalized so as to embrace pluralism, the development of a generalized Stickbeing Logic would only mark a return to the quest for a Universal Logic of Sticks. Collapse into this phallologocentric quest may ultimately render Stickman Logic impotent as a tool of liberation.

Postscript - 2001

It is hard to say what has happened to the new Stückmenschen Wissenschaft in the ten months since this paper was originally released. One may get a sense of the stückgeist, however, by looking at any of the further work that has been done. Consider, for instance, the following letter, which we provide in its entirety.

April 10, 2001

Mr./Ms.
Magnus:

I believe I've solved your patriarchal problem with Stick[person] Logic. Attached is an image in Graphical Interchange Format showing replacement symbols for the five logical constants. Note that all are now sexually ambiguous, showing both male and female characteristics. In an additional bow to diversity, the negation constant is gay, and the conditional is pregnant. Thus these symbols also speak to freedom of sexual preference and the power of motherhood.

There may, as you suggest, be some problems with expressing these typographically. But -- as you also pointed out -- the use of the old symbols as abbreviations overcomes this difficulty. Better still, the old symbols now "[incorporate] the insights of feminist, queer, multiculturalist, and ecological critiques." (The use of Egyptian symbols for male and female addresses the multiculturalist insight; the use of stick symbols in general -- representing a return to a simpler life -- expresses the ecological insight.)

One question remains: in "[embracing] pluralism" does this new "Stickperson Logic" truly "mark a return to the quest for a Universal Logic of Sticks"? If so, whether our new symbolism will "ultimately render Stick[person] Logic impotent as a tool of liberation" remains to be seen.

Charles F. Munat
Seattle, Washington

One must applaud this extension of the insights of the original paper and be heartened by Munat's suggestions. Nevertheless, there is a lingering suspicion that this logic is "impotent as a tool" for much of anything. Munat notes in other correspondence that although Stickman Logic "was effectively stillborn... [it] should be accorded the gravity that is its due." It can ask for no fairer a hearing than that.

Stickman Logic was originally published June 19, 2000 by P.D. Magnus. P.D. would like to thank Casey Schroeder for comments on the original symbolism and Charles Munat for the postscript's epistolary
Expected Counts in Two-Way Tables: AP Statistics Study Guide

Ahoy, budding statisticians! 📊 Prepare to dive into the sea of two-way tables and emerge as masters of chi-squared tests. Whether you’re stuck in a hypothesis tangle or simply can't distinguish between independence and homogeneity, this guide is your lifeboat. Let's decode this statistical wizardry and make the numbers dance!

Navigating Two-Way Tables and Chi-Squared Tests

Two-way tables allow us to analyze how categorical variables interact. Picture it as sorting M&Ms by color and then by size to check if there’s any pattern. To see if this pattern is a coincidence or a meaningful connection, we use chi-squared (χ²) tests. 🍬📏

Test for Homogeneity

A chi-squared test for homogeneity compares the distribution of a categorical variable across different groups. Imagine you’re a chef testing three recipes for cookies 🍪. You want to know if the popularity of cookie flavors is consistent across different batches. By performing this test, you check if each batch loves chocolate chip 🍫 just as much (or little) as the others.

Test for Independence

The chi-squared test for independence examines the relationship between two categorical variables within a single group. Let's say you’re exploring if wearing superhero capes 🦸 affects the choice of ice cream flavors 🍦 at a party. This test helps determine if these two variables (cape wearers and ice cream choice) are connected or if they exist in parallel universes of deliciousness.

Expected Counts: Calculating the Matrix of Destiny

Regardless of your chosen chi-squared test, we need the expected counts—our secret sauce! Imagine your observed counts table is a treasure map. To verify its legitimacy, we compute an expected counts map based on statistical assumptions (sans pirates 🦜 or buried gold). Here’s the step-by-step map to creating the expected counts:

1. Lay Out the Land: Start with each category’s total counts.
2. Perform Multiplication Magic: For each cell, multiply the respective row total with the column total.
3. Divide and Conquer: Divide the resulting value by the grand total (table total).

The formula, in plain English, is:

\[ \text{Expected Count} = \frac{\text{Row Total} \times \text{Column Total}}{\text{Table Total}} \]

Sounds like potion making, right? Let's see it in action.

Example Spell

Suppose you have a table showing how many people prefer cats 🐱 versus dogs 🐶 based on whether they live in urban or rural areas. Here’s how you would calculate the expected count for the "Urban-Cats" cell.

|       | Cats | Dogs | Total |
| ----- | ---- | ---- | ----- |
| Urban | 50   | 70   | 120   |
| Rural | 40   | 40   | 80    |
| Total | 90   | 110  | 200   |

For the "Urban-Cats" cell:

• Urban Total: 120
• Cats Total: 90
• Table Total: 200

\[ \text{Expected Count} = \frac{120 \times 90}{200} = 54 \]

Repeat this sorcery for each cell. Your table of expectations would be:

|       | Cats | Dogs |
| ----- | ---- | ---- |
| Urban | 54   | 66   |
| Rural | 36   | 44   |

Voila! You have your expected counts, ready to compare with your observed counts.

Applying the Tests

Once you've conjured up your expected counts, you’re set to run the chi-squared tests. For both homogeneity and independence, you’ll calculate the chi-squared statistic to see if your observed data diverges significantly from what you’d expect by chance. This process can help answer various questions, such as:

• Homogeneity: Is the distribution of superhero fans (Marvel vs. DC) the same across different cities? 🦸‍♂️🦹‍♀️
• Independence: Is there a relationship between caffeine consumption and preference for morning vs. night classes among students? ☕🌆

Key Terms to Review

• Chi-Squared Test for Homogeneity: It compares whether different groups have similar distributions across categories.
• Chi-Squared Test for Independence: It examines the relationship between two categorical variables within a single population.
• Expected Counts: These values represent what we expect to observe in each category if there’s no association between variables.
• Proportions: The relative amount or share of a characteristic within a population or sample.
• χ² Tests: These statistical tests determine if there’s a significant association between two categorical variables.

And there you have it, your crash course in navigating two-way tables and chi-squared tests! With this guide, you're equipped to tackle expected counts like a pro. May your data be ever precise, and your χ² statistics low (or high if that's what your hypothesis needs)! Good luck on your AP Statistics journey, and remember—statistically speaking, you’ve got this! 🎩✨
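The recipe above is easy to check with a short script. The following Python sketch (the function name and code are ours, for illustration; the values match the cats-versus-dogs example above) computes the expected counts and then the chi-squared statistic from them:

```python
# Expected count for cell (i, j): row_total[i] * col_total[j] / grand_total
def expected_counts(observed):
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand_total = sum(row_totals)
    return [[r * c / grand_total for c in col_totals] for r in row_totals]

observed = [[50, 70],   # Urban: cats, dogs
            [40, 40]]   # Rural: cats, dogs
expected = expected_counts(observed)
print(expected)  # [[54.0, 66.0], [36.0, 44.0]]

# Chi-squared statistic: sum over all cells of (observed - expected)^2 / expected
chi2 = sum((o - e) ** 2 / e
           for obs_row, exp_row in zip(observed, expected)
           for o, e in zip(obs_row, exp_row))
print(round(chi2, 3))  # 1.347
```

With 1 degree of freedom for this 2×2 table, a statistic of about 1.35 is well below the usual critical value, so these data would not suggest an association between area and pet preference.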
Maximum Likelihood for Cross-Lagged Panel Models with Fixed Effects - Allison et al. 2017

Replied on Fri, 01/13/2023 - 19:56

Welcome to OpenMx! I don't see anything obviously wrong with your model specification, but there are a couple of standard things to check. At the risk of being a bit pedantic, let me explain the error message.

"fit is not finite": I tried to calculate the fit function value, but what I got was infinity, negative infinity, or NA.

"The continuous part of the model implied covariance is not positive definite": The model-implied covariance matrix is not acting like an actual covariance matrix. The "positive-definite" language is a generalization of the requirement that variances are always greater than zero (i.e., positive): by analogy, covariance matrices are always positive definite, but your model-implied covariance matrix appears not to be.

Now, on to what you can do. First, it looks like you're using the haven package to read in data. My experiences with haven have not been positive. Check to make sure wages is a data.frame or matrix by running is(wages). If not, you can use read.table() instead of haven, or cast it to a data.frame with ds <- as.data.frame(wages).

Second, you can check the model-implied covariance matrix at the starting values with

mxGetExpected(model1, 'covariance')
round(eigen(mxGetExpected(model1, 'covariance'))$values, 3)
# [1] 473.035   1.172   0.688   0.578   0.551   0.285   0.000   0.000   0.000   0.000   0.000   0.000
#[13]   0.000   0.000   0.000   0.000   0.000  -0.138  -0.315  -1.018  -2.912  -4.927

Your starting values imply 11 zero eigenvalues and 5 negative ones. This is probably the issue. Set starting values (particularly variances) large enough that all the eigenvalues of the model-implied covariance matrix are positive. A built-in helper function for this is mxAutoStart().

smodel <- mxAutoStart(model1)
run <- mxRun(smodel)

Third, it looks like you have a free parameter named "ed" and also a variable named "ed".
I don't know that this causes a problem or your problem, but it's probably not a good idea. Use a different variable name or free parameter label.

Fourth and finally, think about ways to make the model specification shorter and less tedious. This is just a style question. For example, many of the repeated paths in your script could be replaced with the following:

mxPath('ed', paste0('wks', 2:7), values=1, labels='edpath')
mxPath(from=paste0(c('wks', 'lwages', 'union'), rep(1:6, each=3)),
       to=rep(paste0('wks', 2:7), each=3),
       labels=rep(c('lambda', 'beta2', 'beta3'), times=6))

Replied on Sat, 01/14/2023 - 14:36

In reply to Check the data and starting values by AdminHunter

Dear Michael Hunter,

thank you very much for your detailed answer. I'll check whether I can fix my problems with the help of your comments in the following days. Thanks a lot! All the best to you!

Replied on Sun, 01/22/2023 - 08:17

Any further hints? - lavaan, umx packages

Thanks again for your comment. I tried to solve the problem with the help of your instructions, but unfortunately it didn't work. An updated version can be found attached ("TestOpenMx_new.R"). Among other things, I updated the starting parameters with regard to the suggestions produced by mxAutoStart(), but, as I said, it still did not solve the problem. In order to approach the core of the issue, I tried to estimate a short or reduced version of my general model. But even that shows the same error message ("TestOpenMx_shortmodel"). Additionally, I converted the correct lavaan code, provided by the authors, into OpenMx by using the "metaSEM" package. It produced some results, but those results were completely different from the original results provided by lavaan. Do you have any ideas what could be the reason for this? I also attached this R code ("lavaan.R"). Furthermore, I stumbled across the umx package. However, I can't find any explanations of where exactly the difference to OpenMx is.
Out of curiosity, I tried the OpenMx script, slightly modified, with the umx package. However, this has also led to no positive result ("umx_new.R" - however, this might not be important right now). Maybe someone of you has experience with this and knows the difference to OpenMx.

I hope my explanations are not too excessive. Basically, I am trying to reproduce the paper pinned above with OpenMx. Since this doesn't work, I'm currently trying different possible solutions, but they don't currently lead to the right result. Hopefully, someone in this forum is able to help me solve the issue. Many thanks in advance!

File attachments

Replied on Fri, 01/27/2023 - 11:00

AdminNeale
Joined: 03/01/2013

In reply to Any further hints? - lavaan, umx packages by ppl

Missing the residual variances on 17/23 variables

I plotted your model to take a look at it:

require(umx); plot(model1)

I also had a look at the eigenvalues of the expected covariance matrix, which revealed a lot of negative eigenvalues. The diagram shows that there aren't residual variances on many of the variables. Changing starting values won't fix this problem.

I partly fixed it by making all variables have residual variances, replacing the few variables that did have residuals with the entire list of them. Then using mxTryHard() gets over the issue that these were all starting at zero. However, better is to explicitly tell it to start the residual variances at roughly the observed variances of the manifest variables, and to avoid giving the variables high covariances between them.
mxOption(key='Number of Threads', value=parallel::detectCores() - 1)

# arrow = 1: from -> to
# arrow = 2: from <-> to
# free = FALSE -> fixed path (parameter values are ex ante fixed)
# free = TRUE -> free path (parameter values are estimated)
# from: sources of the new paths
# values: starting values of parameters (reasonable values give faster convergence)
# connect: type of source-to-sink connection
# labels: names of the paths
# first try to replicate the model in
# "Linear dynamic panel-data estimation using maximum likelihood and structural equation modeling"
# yit = λ yit−1 + x∗it β + w∗i δ + αi + ξt + υit, ∀ i, t
# E(υit | yt−1,i, xt,i, wi, αi) = 0, ∀ i, t

wages <- haven::read_dta("https://www3.nd.edu/~rwilliam/statafiles/wageswide.dta")
wages <- as.data.frame(scale(wages))
#wages <- as.matrix(wages)

## 595 households who reported a non-zero wage (measured for 7 years: 1976 - 1982)
# y = wks: number of weeks employed in each year
# x = union (dummy): 1 if wage set by union contract
# w = lwage: ln of wage in each year
# z = ED: years of education in 1976

data <- mxData(wages, type = "raw")
latents <- "alpha"
manifests <- c("wks1", "union1", "lwage1", "wks2", "union2", "lwage2",
               "wks3", "union3", "lwage3", "wks4", "union4", "lwage4",
               "wks5", "union5", "lwage5", "wks6", "union6", "lwage6",
               "wks7", "union7", "lwage7", "ed")
cov_data <- cov(wages[, manifests])

model1 <- mxModel(
  "Test1", type = "RAM",
  # Manifest variables
  manifestVars = manifests,
  # Latent variable
  latentVars = "alpha",
  #mxPath(from = "alpha", to = manifests,
  #       free = FALSE, values = c(rep(1, 21), 0)),
  # Covariances between alpha and all exogenous variables, except "ed"
  mxPath(from = "alpha",
         to = c("wks1", "union1", "lwage1", "union2", "lwage2", "union3", "lwage3",
                "union4", "lwage4", "union5", "lwage5", "union6", "lwage6",
                "union7", "lwage7", "ed"),
         arrows = 2, connect = "unique.pairs", free = FALSE,
         values = c(rep(1, 15), 0)),
  # Paths from alpha to the endogenous variables, fixed effects
  mxPath(from = "alpha",
         to = c("wks2", "wks3", "wks4", "wks5", "wks6", "wks7"),
         arrows = 1, connect = "unique.pairs", free = FALSE, values = rep(1, 6)),
  # Paths from the exogenous to the endogenous variables
  mxPath(from = c("ed", "wks1", "lwage1", "union1"), to = "wks2",
         arrows = 1, connect = "unique.pairs", free = TRUE, values = rep(1, 4),
         labels = c("edpath", "lambda", "beta2", "beta3")),
  mxPath(from = c("ed", "wks2", "lwage2", "union2"), to = "wks3",
         arrows = 1, connect = "unique.pairs", free = TRUE, values = rep(1, 4),
         labels = c("edpath", "lambda", "beta2", "beta3")),
  mxPath(from = c("ed", "wks3", "lwage3", "union3"), to = "wks4",
         arrows = 1, connect = "unique.pairs", free = TRUE, values = rep(1, 4),
         labels = c("edpath", "lambda", "beta2", "beta3")),
  mxPath(from = c("ed", "wks4", "lwage4", "union4"), to = "wks5",
         arrows = 1, connect = "unique.pairs", free = TRUE, values = rep(1, 4),
         labels = c("edpath", "lambda", "beta2", "beta3")),
  mxPath(from = c("ed", "wks5", "lwage5", "union5"), to = "wks6",
         arrows = 1, connect = "unique.pairs", free = TRUE, values = rep(1, 4),
         labels = c("edpath", "lambda", "beta2", "beta3")),
  mxPath(from = c("ed", "wks6", "lwage6", "union6"), to = "wks7",
         arrows = 1, connect = "unique.pairs", free = TRUE, values = rep(1, 4),
         labels = c("edpath", "lambda", "beta2", "beta3")),
  # Covariances between lagged wks and later union status
  mxPath(from = "wks2", to = c("union3", "union4", "union5", "union6"),
         arrows = 2, connect = "unique.pairs", free = TRUE, values = rep(1, 4)),
  mxPath(from = "wks3", to = c("union4", "union5", "union6"),
         arrows = 2, connect = "unique.pairs", free = TRUE, values = rep(1, 3)),
  mxPath(from = "wks4", to = c("union5", "union6"),
         arrows = 2, connect = "unique.pairs", free = TRUE, values = rep(1, 2)),
  mxPath(from = "wks5", to = "union6",
         arrows = 2, connect = "unique.pairs", free = TRUE, values = 1),
  # Residual variances
  mxPath(from = manifests, arrows = 2, connect = "single", free = TRUE,
         values = rep(10, 5), labels = rep("residual", 6)),
  # Means
  mxPath(from = "one", to = manifests, arrows = 1, free = TRUE, values = 1),
  mxData(wages, type = "raw", numObs = 595))

round(eigen(mxGetExpected(model1, "covariance"))$values, 3)
smodel1 <- mxAutoStart(model1)
mxGetExpected(smodel1, "covariance")
fitmodel <- mxTryHard(model1)

File attachments

Replied on Tue, 01/31/2023 - 11:25

In reply to Missing the residual variances on 17/23 variables by AdminNeale

Dear Michael Neale,

thank you very much for your helpful answer. It helped a lot to get rid of the error messages! All the best to you!
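The eigenvalue diagnostic used throughout this thread can be illustrated outside of R as well. The following self-contained Python sketch (our illustration, not OpenMx code) applies Sylvester's criterion, which for a symmetric matrix is equivalent to all eigenvalues being positive, to a legitimate covariance matrix and to a broken one:

```python
def det(m):
    # Laplace expansion along the first row; fine for small matrices like these.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def is_positive_definite(cov):
    # Sylvester's criterion: a symmetric matrix is positive definite exactly when
    # every leading principal minor (top-left k-by-k determinant) is positive.
    return all(det([row[:k] for row in cov[:k]]) > 0
               for k in range(1, len(cov) + 1))

good = [[2.0, 0.5], [0.5, 1.0]]  # variances 2 and 1, modest covariance
bad  = [[1.0, 2.0], [2.0, 1.0]]  # "covariance" larger than the variances allow
print(is_positive_definite(good), is_positive_definite(bad))  # True False
```

A model-implied covariance matrix that fails this check could not have come from any real-valued data, which is why the optimizer reports a non-finite fit when the model implies one.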
NCERT Solutions for Class 12 Maths – FREE PDF Download

Are you looking for the most accurate NCERT Solutions for Class 12 Maths? Then CoolGyan.Org is the perfect website to download them in PDF format, absolutely free of cost. You might be aware that the curriculum for all schools that follow the Central Board of Secondary Education (CBSE) is set by The National Council of Education Research and Training (NCERT) across the nation. The NCERT Solutions for Class 12 Maths PDF follow the syllabus and help students solve the exercises given in the textbooks.

Maths Revision Notes for Class 12

NCERT Solutions for Class 12 Maths

• Chapter 1 – Relations and Functions
• Chapter 2 – Inverse Trigonometric Functions
• Chapter 3 – Matrices
• Chapter 4 – Determinants
• Chapter 5 – Continuity and Differentiability
• Chapter 6 – Application of Derivatives
• Chapter 7 – Integrals
• Chapter 8 – Application of Integrals
• Chapter 9 – Differential Equations
• Chapter 10 – Vector Algebra
• Chapter 11 – Three Dimensional Geometry
• Chapter 12 – Linear Programming
• Chapter 13 – Probability

NCERT Solutions for Class 12 Maths – Free PDF Download

Class 12 is an important milestone in your life, as you take some serious decisions about your future based on your performance. Indeed, Mathematics forms a vital component of Class 12 as well as of every competitive exam's syllabus. Whether you choose to be an engineer or a doctor, you will have to deal with Mathematics in every area. Making the concepts of Class 12 concrete can immensely help you in securing good marks in the CBSE Class 12 board exam as well as in various competitive exams. CBSE NCERT Solutions for Class 12 Maths play a vital role in your exam preparation, as they offer detailed chapter-wise solutions for all exercises. First and foremost, you should imprint in your mind that NCERT books are the biggest tool you can have to get well-versed with the fundamentals.
This will help you to not leave out any important question in your exam preparation. The benefits of accessing Class 12 CBSE Maths NCERT Solutions are not confined to your success in Maths; they also lay a foundation for other important subjects. Getting a good grip on the details of chapters such as Calculus and Differential Equations can be of great help in understanding the derivations and concepts behind the numericals of Science. You can easily download a FREE CBSE Class 12 NCERT Maths Solutions PDF for every chapter of Mathematics. The solutions for each exercise have been curated and reviewed by some of the best teachers across India, and cover all the answers to the questions from the 13 important chapters of the NCERT Mathematics textbook. Class 12 Maths NCERT Solutions covers crucial and intricate chapters such as Differentiation, Integration, Algebra, and others. The chapters are designed sequentially and logically, which allows you to understand the concepts easily. Let us look into the Class 12 Maths chapters in detail:

Chapter 1: Relations and Functions

The first chapter of CBSE Class 12 deals with the notion of relations and functions, domain, co-domain and range, which have already been introduced in the Class 11 syllabus, along with various types of specific real-valued functions and their respective graphs. In this chapter, you will be solving questions from a total of four exercises entailing all the essential topics.

Chapter 2: Inverse Trigonometric Functions

There are many functions which are not one-one, onto or both, and hence we cannot talk of their inverses. In Class 11 you studied that trigonometric functions are not one-one and onto over their natural domains and ranges, and hence their inverses do not exist.
In CBSE Class 12 Inverse Trigonometric Functions, there are only two exercises, where you will study the restrictions on the domains and ranges of trigonometric functions which make their inverses possible, and observe their behaviour through graphical representations. Besides, some elementary properties will also be discussed. The inverse trigonometric functions hold great value in calculus, for they serve to define many integrals. The concepts of inverse trigonometric functions are also applied widely in science and engineering.

Chapter 3: Matrices

You need to know about matrices as they are instrumental in various branches of mathematics. Matrices are considered one of the most powerful tools in mathematics, and they simplify our work to a great extent when compared with other methods. There are a total of 62 questions in the 4 exercises of this chapter. In this chapter, you will be delving deeper into the fundamentals of matrices and matrix algebra.

Chapter 4: Determinants

In Chapter 3, matrices and the algebra of matrices are explained, whereas in this chapter you will be studying determinants up to order three only, with real entries. The six exercises are distributed in a way that you will study the various properties of determinants, cofactors and applications of determinants in finding the area of a triangle, minors, adjoint and inverse of a square matrix, consistency and inconsistency of a system of linear equations, and the solution of linear equations in two or three variables using the inverse of a matrix.

Chapter 5: Continuity and Differentiability

This chapter is the extension of the differentiation of functions which you have studied in Class XI. You must have learnt to differentiate certain functions like polynomial functions and trigonometric functions. This chapter explains the very important concepts of continuity, differentiability and the relations between them. You will be learning about the differentiation of inverse trigonometric functions.
Furthermore, you will be getting acquainted with a new class of functions called exponential and logarithmic functions. There are a total of eight exercises in this chapter, so you will have to dedicate some extra time and effort to get well-versed with it.

Chapter 6: Application of Derivatives

In Chapter 5, you learn how to find the derivative of composite functions, implicit functions, exponential functions, inverse trigonometric functions, and logarithmic functions. In this chapter, you will study the applications of the derivative in various disciplines such as engineering, science, and many other fields. For instance, we will learn how the derivative can be used to determine the rate of change of quantities, and to find the equations of the tangent and normal to a curve at a point, among many other uses. We will also use the derivative to find intervals of increasing or decreasing functions and, finally, to find an approximate value of certain quantities. There are five exercises in total, giving you a detailed insight into the applications of derivatives.

Chapter 7: Integrals

Differential calculus is based on the idea of a derivative, which came into existence for the problem of defining tangent lines to the graphs of functions and calculating the slopes of such lines. Integral calculus eases the problem of defining and calculating the area of the region bounded by the graphs of functions. With a total of eleven exercises, you will have to learn every topic and its related questions with sheer concentration.

Chapter 8: Application of Integrals

Here, in this chapter, you will study some specific applications of integrals to find the area under simple curves, as well as the area between lines and arcs of circles, parabolas and ellipses (standard forms only). There are two exercises in this chapter, in which you will also deal with finding the area bounded by the above-said curves.
Chapter 9: Differential Equations

In Differential Equations, you will be studying some basic concepts related to differential equations: general and particular solutions of a differential equation, the formation of differential equations, a number of methods to solve questions based on first-order, first-degree differential equations, and some applications of differential equations in different areas. There are a total of six exercises in this chapter. Differential equations are implemented in a plethora of applications in all the other subjects and areas. Hence, if you study the Differential Equations chapter in a detailed manner, it will help you to gain a detailed insight into all modern scientific investigations.

Chapter 10: Vector Algebra

Did you know that quantities that involve only one value (magnitude), which is a real number, are called scalars, whereas quantities that involve both magnitude and direction are called vectors? In the chapter Vector Algebra, you will be studying some basic concepts of vectors, various operations on vectors, and their algebraic as well as geometric properties.

Chapter 11: Three Dimensional Geometry

The chapter Three Dimensional Geometry will take you through the study of the direction cosines and direction ratios of a line joining two points, and also the equations of lines and planes in space under different conditions. Along with that, you will also get to know about the angle between two lines, between a line and a plane, and between two planes, the shortest distance between two skew lines, and the distance of a point from a plane.

Chapter 12: Linear Programming

This chapter has only two exercises, in which you will study some linear programming problems and their solutions by the graphical method only, though there are many other methods available to solve such problems.
Chapter 13: Probability

The chapter Probability will take you through the important concept of the conditional probability of an event given that another event has occurred, which will in turn clarify Bayes' theorem, the multiplication rule of probability, and the independence of events. Across a total of five exercises, you will also learn the important concept of a random variable and its probability distribution, and the mean and variance of a probability distribution.

The Class 12 Maths NCERT Solutions provided on CoolGyan have been written to help you understand all the chapter-wise problems in the latest and updated textbooks prescribed by NCERT. Being the most preferred study material, NCERT Solutions will help the students in various ways. Let us look into this:

Key Benefits of CBSE Class 12 Maths NCERT Solutions from CoolGyan:

• You can easily grasp some crucial tips to answer particular complex questions in a detailed manner.
• You no longer have to turn several pages for revision before the examination. The Maths Class 12 NCERT Solutions PDF download is available on our website for FREE, which you can access anytime.
• All the chapters' solutions are arranged in a sequential manner, which enables smooth learning. Maths is a sequential subject, and it is important that it is studied in the right manner.
• The solutions are explained in a detailed manner. Step-by-step solutions provided for all the exercise questions help you to know the subject in a detailed manner.
• The shortcuts help you to take your preparation to the next level. CoolGyan NCERT Solutions will help you in improving your mathematical skills as well.
Demystifying Sort Time Complexity – A Comprehensive Guide to Sorting Algorithms

Introduction to Sorting Algorithms

Sorting algorithms are an essential part of computer science and have a wide range of applications. Whether you are organizing a set of data or looking for specific information, sorting algorithms can make your life much easier. In this blog post, we will delve into the world of sorting algorithms, and specifically focus on their time complexity.

What is time complexity? Time complexity is a measure of how much time an algorithm takes to complete as a function of the input size. It provides an estimate of the efficiency of an algorithm. A sorting algorithm with a lower time complexity would generally be considered more efficient, as it can handle larger data sets without a significant increase in processing time.

Overview of Common Sorting Algorithms

There are several common sorting algorithms used in computer science. In this section, we will explore three widely used ones: Bubble Sort, Selection Sort, and Insertion Sort.

Bubble Sort

Bubble Sort is a simple algorithm that repeatedly compares adjacent elements and swaps them if they are in the wrong order. It continues this process until the entire list is sorted.

Pseudocode for Bubble Sort:

```
1. Set n to be the length of the list
2. Repeat the following steps n-1 times:
   - Compare each element with its adjacent element
   - If the elements are in the wrong order, swap them
3. Continue this process until the list is sorted
```

Time complexity analysis: The time complexity of Bubble Sort is O(n^2) in the worst-case scenario, where n is the number of elements in the list. This is because it requires nested loops to compare and swap elements.

Selection Sort

Selection Sort is another simple sorting algorithm that finds the smallest element in the unsorted portion of the list and places it at the beginning. It then repeats this process for the remaining unsorted portion until the entire list is sorted.
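Before moving on, the Bubble Sort pseudocode above can be made concrete. Here is a minimal, runnable sketch in Python (the post's pseudocode is language-agnostic; the early-exit `swapped` flag is a common optimization we have added, not part of the pseudocode itself):

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order elements until sorted."""
    data = list(items)  # work on a copy, leave the input untouched
    n = len(data)
    for i in range(n - 1):
        swapped = False
        # after pass i, the last i elements are already in their final place
        for j in range(n - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
                swapped = True
        if not swapped:  # no swaps on this pass means the list is sorted
            break
    return data

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

The two nested loops are exactly where the O(n^2) worst-case cost comes from.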
Pseudocode for Selection Sort:

```
1. Set n to be the length of the list
2. Repeat the following steps n-1 times:
   - Find the smallest element in the unsorted portion
   - Swap it with the first element of the unsorted portion
3. Continue this process until the list is sorted
```

Time complexity analysis: The time complexity of Selection Sort is also O(n^2) in the worst-case scenario, because it requires nested loops to find the smallest element and swap it with the first element of the unsorted portion.

Insertion Sort

Insertion Sort works by partitioning the list into a sorted and an unsorted portion. It then iterates through the unsorted portion, comparing each element with the elements in the sorted portion and placing it in the correct position.

Pseudocode for Insertion Sort:

```
1. Set n to be the length of the list
2. Repeat the following steps n-1 times:
   - Take the next unsorted element and insert it into the correct position in the sorted portion
3. Continue this process until the list is sorted
```

Time complexity analysis: The time complexity of Insertion Sort is also O(n^2) in the worst-case scenario because it requires shifting elements to make room for the insertion.

More Efficient Sorting Algorithms

While Bubble Sort, Selection Sort, and Insertion Sort are relatively simple and easy to implement, they may not be the most efficient for large data sets. In this section, we will explore three more efficient sorting algorithms: Merge Sort, Quick Sort, and Heap Sort.

Merge Sort

Merge Sort is a divide-and-conquer algorithm that works by dividing the list into smaller sublists, sorting them recursively, and then merging the sorted sublists to produce the final sorted list.

Pseudocode for Merge Sort:

```
1. Divide the unsorted list into two halves
2. Recursively sort each half
3. Merge the sorted halves back into a single sorted list
```

Time complexity analysis: The time complexity of Merge Sort is O(n log n) in all scenarios, which makes it significantly more efficient than the previous sorting algorithms. It achieves this efficiency by dividing the list into smaller sublists and then merging them.

Quick Sort

Quick Sort is another divide-and-conquer algorithm that works by partitioning the list into two sublists based on a chosen pivot element, sorting the sublists recursively, and combining them to produce the final sorted list.

Pseudocode for Quick Sort:

```
1. Choose a pivot element from the list
2. Partition the list into two sublists: elements smaller than the pivot and elements larger than the pivot
3. Recursively sort each sublist
4. Combine the sorted sublists to produce the final sorted list
```

Time complexity analysis: The time complexity of Quick Sort is also O(n log n) in the average and best-case scenarios. However, in the worst-case scenario, where the pivot element is poorly chosen, the time complexity can degrade to O(n^2). Despite this worst-case scenario, Quick Sort is often preferred due to its average-case performance and efficient partitioning.

Heap Sort

Heap Sort is a comparison-based sorting algorithm that uses a binary heap data structure to sort elements. It works by building a max-heap (or min-heap) from the list and repeatedly removing the root (largest) element until the list is sorted.

Pseudocode for Heap Sort:

```
1. Build a max-heap (or min-heap) from the list
2. Replace the root element (largest for max-heap or smallest for min-heap) with the last element of the heap
3. Remove the last element from the heap
4. Percolate down the new root to restore the heap property
5. Repeat steps 2-4 until the heap is empty
```

Time complexity analysis: The time complexity of Heap Sort is also O(n log n) in all scenarios.
It achieves this efficiency by leveraging the properties of the heap data structure and the percolate-down operation.

Evaluating Sorting Algorithms

When evaluating sorting algorithms, several factors need to be considered beyond just their time complexity. Here are some additional considerations:

Best-case, average-case, and worst-case scenarios: Sorting algorithms may have different time complexities depending on the input data. It is important to evaluate their performance in best-case, average-case, and worst-case scenarios to understand their overall efficiency.

Space complexity considerations: Sorting algorithms can have different space complexities, depending on whether they perform sorting in-place or require additional space for temporary storage. In-place algorithms are more memory-efficient, while those requiring additional space may offer better time complexities.

Choosing the right sorting algorithm for specific scenarios: Different sorting algorithms have their own strengths and weaknesses. It is important to consider factors such as the size of the data set, the current state of the data (partially sorted or completely unsorted), and the available resources when selecting a sorting algorithm for a specific scenario.

In this blog post, we explored various sorting algorithms and their time complexities. We started with simple and straightforward algorithms like Bubble Sort, Selection Sort, and Insertion Sort, which have quadratic time complexities. Then, we moved on to more efficient algorithms like Merge Sort, Quick Sort, and Heap Sort, which have time complexities of O(n log n).

Understanding the time complexity of sorting algorithms is crucial for developing efficient and scalable applications. By choosing the right sorting algorithm based on the size and nature of the data set, you can significantly improve the performance of your applications.
To continue your learning journey, I recommend diving deeper into the various sorting algorithms, experimenting with their implementations, and exploring additional topics such as stable sorting, parallel sorting, and external sorting.
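As a concrete companion to the divide-and-conquer discussion, here is a minimal Merge Sort in Python (our own sketch following the pseudocode given earlier, not code from the original post):

```python
def merge_sort(items):
    """Recursively split the list, sort each half, and merge the halves."""
    if len(items) <= 1:          # a list of 0 or 1 elements is already sorted
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # merge the two sorted halves back into one sorted list
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])      # whichever half has leftovers is appended
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```

The log n levels of splitting, each requiring O(n) merging work, give the O(n log n) bound discussed above.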
Sharpe ratio

In this article, you'll learn about the Sharpe ratio and how it's calculated in Composer.

What is it?

The Sharpe ratio is a measure of an investment's return adjusted for its risk (another measure of risk-adjusted return is the Calmar ratio). The higher the Sharpe ratio, the better a return an investment gets per unit of risk taken.

What is a "good" Sharpe ratio?

A Sharpe ratio over 1 is considered good; over 2, very good; and over 3, excellent.

An example

Let's take a look at an example. In this backtest, we see the symphony's Sharpe ratio is 1.08 while the benchmark's ($SPY) is 0.81. This means that the symphony had a better risk-adjusted return than the benchmark over the backtest time period.

Traditional formula for the Sharpe ratio

The Sharpe ratio, developed by Nobel laureate William Sharpe, is traditionally calculated as follows:

(Return of investment - Risk-free rate of return) / Standard deviation of the investment's return in excess of the risk-free rate

The risk-free rate of return is the return of an investment with (hypothetically) zero risk, like the interest rate on three-month U.S. Treasury bills. The risk-free rate depends on what instrument is chosen (e.g., three-month or 10-year Treasury bills) and the time the investment is made, as the same instrument may perform differently at different times. In addition, using U.S. instruments to calculate the risk-free rate may not be appropriate for investors living in or investing in other countries. For these reasons, Composer does not include the risk-free rate in the calculation of the Sharpe ratio. In other words, the risk-free rate is assumed to be zero.

How is the Sharpe ratio calculated in Composer?

The Sharpe ratio you see in Composer is calculated by dividing the annualized arithmetic mean of the daily returns by the annualized standard deviation of the daily returns.
Both the return of the investment and its standard deviation are annualized, meaning they are standardized to a period of a year.

Why is the arithmetic mean of returns used in the calculation?

The geometric mean is often used when calculating financial performance. The difference between the arithmetic mean and the geometric mean of returns is known as the volatility drag, because returns that vary over time have a lower geometric than arithmetic mean. The Sharpe ratio calculation corrects for this volatility drag by dividing the mean of the returns by the standard deviation of the returns (standard deviation is a measure of volatility). If the geometric mean were used in the numerator of the formula, the volatility drag would be corrected twice. While best practices are still up for debate, this practice could lead to overly conservative estimates.

Step-by-step calculation of the Sharpe ratio

Let's go through the calculation of the Sharpe ratio step-by-step for a symphony backtest. Here's how we do it:

For the numerator (annualized arithmetic mean of the daily returns):
• Compute each day's percent return over the backtesting period
• Compute the arithmetic mean of those daily returns (i.e., sum them and divide by the number of values, which is the number of trading days in the backtest)
• Annualize that value, which in this case means multiplying by 252, the typical number of trading days in a year

For the denominator (annualized standard deviation of the daily returns):
• Compute each day's percent return over the backtesting period
• Compute the standard deviation of those daily returns
• Annualize that value, which in this case means multiplying by the square root of 252, the typical number of trading days in a year

Finally, divide the annualized arithmetic mean of the daily returns by the annualized standard deviation of the daily returns.
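The steps above can be sketched directly in code. This is our own minimal Python illustration of the described calculation, not Composer's actual implementation; the function name and the sample returns are made up:

```python
import statistics

TRADING_DAYS = 252  # typical number of trading days in a year

def sharpe_ratio(daily_returns):
    """Annualized arithmetic mean of daily returns divided by the
    annualized standard deviation, with the risk-free rate assumed zero.
    Note: we use the sample standard deviation here; whether sample or
    population std dev is used is an implementation detail we are assuming."""
    mean_daily = sum(daily_returns) / len(daily_returns)
    annualized_return = mean_daily * TRADING_DAYS
    stdev_daily = statistics.stdev(daily_returns)
    annualized_vol = stdev_daily * TRADING_DAYS ** 0.5
    return annualized_return / annualized_vol

# hypothetical daily percent returns (as fractions) from a backtest
returns = [0.012, -0.008, 0.005, 0.003, -0.002, 0.004]
print(round(sharpe_ratio(returns), 2))
```

Note that annualizing the mean by 252 but the standard deviation by sqrt(252) is what makes the ratio scale correctly to a yearly horizon.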
Discrete vs Continuous Quiz Questions and Answers

Welcome to the "Discrete vs Continuous" quiz, an engaging examination of a fundamental concept in mathematics and statistics. This quiz explores the key differences between discrete and continuous variables, providing a comprehensive understanding of these critical concepts. Data can be described in two ways, i.e., discrete and continuous. Discrete data can take on only integer values, whereas continuous data can take on any value. Are you ready for the Discrete vs. Continuous quiz?

This discrete-or-continuous quiz below is designed to assess and reinforce the student's understanding of the nature and differences between discrete and continuous data. Discrete and continuous variables play an integral role in data analysis, probability, and various fields such as economics, engineering, and science. By taking this quiz, participants will enhance their knowledge of these variables and their applications.

In the quiz, participants will encounter a series of thought-provoking questions that challenge their ability to differentiate between discrete and continuous data. This quiz is suitable for students, professionals, and anyone interested in building a solid foundation in statistical concepts. Upon completing the quiz, participants will gain a clear understanding of when to apply discrete or continuous variables in various situations, an essential skill for making informed decisions based on data analysis. Give it a try and see how well you understand it!

• 1. The qualities of discrete data can be:
□ A. □ B. □ C. □ D.
Correct Answer B. Counted
Discrete data refers to data that can only take on specific values and cannot be measured on a continuous scale. It can only be counted or enumerated. This means that the values of discrete data can be expressed as whole numbers or integers, such as the number of students in a class or the number of cars in a parking lot.
Therefore, the correct answer is "Counted."

• 2. The qualities of continuous data can be:
□ A. □ B. □ C. □ D.
Correct Answer A. Measured
Continuous data refers to data that can take on any value within a specific range. It is not limited to specific values or categories. The term "measured" implies that continuous data can be quantitatively measured, such as temperature, height, or weight. On the other hand, counting is more suitable for discrete data, where values can only be whole numbers or specific categories. Therefore, the correct answer is "Measured" because continuous data can be measured rather than counted.

• 3. Which of these is NOT continuous data?
□ A. A person's weight each week
□ B. The volume of water in the Pacific Ocean each day
□ C. Bikes manufactured in a factory each day
□ D.
Correct Answer C. Bikes manufactured in a factory each day
Bikes manufactured in a factory each day is not continuous data because it represents a count or a whole number value. Continuous data refers to measurements that can take any value within a given range, such as a person's weight or the volume of water in the Pacific Ocean. However, the number of bikes manufactured each day is discrete data, as it can only take specific integer values. Therefore, the correct answer is "Bikes manufactured in a factory each day".

• 4. Which of these is NOT discrete data?
□ A. Weight of a watermelon as measured each week
□ B. How many students attend the class
□ C. How many cars a company sells each day
□ D.
Correct Answer A. Weight of a watermelon as measured each week
The weight of a watermelon as measured each week is NOT discrete data because it can take on any value within a continuous range. Discrete data consists of distinct and separate values, such as the number of students attending a class or the number of cars a company sells each day, which can only be whole numbers.

• 5. Daily rainfall is an example of what sort of data:
□ A. □ B. □ C. □ D.
Correct Answer B.
Continuous
Daily rainfall is an example of continuous data because it can take on any value within a certain range. It is measured on a continuous scale, such as millimeters or inches, and can have decimal values. Unlike discrete data, which can only take on specific values, daily rainfall can vary continuously and is not limited to distinct categories or intervals. Therefore, it is considered continuous data.

• 6. The distance that a cyclist rides each day is what sort of data:
□ A. □ B. □ C. □ D.
Correct Answer B. Continuous
Continuous data refers to data that can take on any value within a certain range. In the context of the question, the distance that a cyclist rides each day can vary and can take on any value within a certain range, such as 10 km, 15.5 km, or 20.3 km. Therefore, it is considered continuous data.

• 7. The frequency of a cyclist riding over a few kms weekly is this sort of data.
□ A. □ B. □ C. □ D.
Correct Answer A. Discrete
The frequency of a cyclist riding over a few kilometers weekly is an example of discrete data. Discrete data consists of distinct, separate values, often counted in whole numbers or integers. In this case, you would count how many times the cyclist rides over a few kilometers each week, which results in specific, countable values. Continuous data, on the other hand, represents measurements that can take on any value within a certain range, such as temperature or time, and can include decimal values.

• 8. The number of coconuts produced by a coconut tree each year is continuous data.
Correct Answer B. False
The statement is false because the number of coconuts produced by a coconut tree each year is not continuous data. Continuous data is measured on a continuous scale and can take any value within a certain range. In this case, the number of coconuts produced is discrete data because it can only take whole number values (e.g., 0, 1, 2, 3, etc.) and cannot be measured as fractions or decimals.

• 9.
The average size of the coconut grown by a tree is continuous data.
Correct Answer A. True
Continuous data refers to data that can take on any value within a certain range. In the case of the average size of coconuts grown by a tree, the size can vary and can be measured on a continuous scale. It can be any value between the smallest and largest size of coconuts. Therefore, the statement that the average size of the coconut grown by a tree is continuous data is true.

• 10. The combined tonnage of mail passing each day through the local postal center is discrete data:
Correct Answer B. False
The combined tonnage of mail passing through the local postal center is continuous data. Continuous data can take any value within a given range and can be measured with a high level of precision, including fractions or decimals. In this case, the weight of the mail can vary continuously, making it a continuous data type. Discrete data, on the other hand, consists of distinct, separate values or categories with no values in between.

• 11. Which of these is continuous data?
□ A. □ B. □ C. □ D.
Correct Answer A. The dog weighs 46.6 kg.
The dog weighs 46.6 kg is an example of continuous data because weight is a measurable quantity that can take on any value within a certain range (in this case, the weight of the dog can be any value greater than 0 kg). In contrast, the other statements provide categorical or discrete data, as they describe characteristics of the dog that can only have specific values (four legs, one tail, two ears).

• 12. Which of these is discrete data?
□ A. □ B. □ C. □ D.
Correct Answer B. Paul has three sisters.
The correct answer is "Paul has three sisters." This is discrete data because it represents a countable and distinct value. The number of sisters Paul has is a whole number and cannot be divided into smaller units.
In contrast, Paul's weight, height, and jumping height are continuous data because they can take on any value within a certain range and can be measured in smaller units.

• 13. Which of the following is a discrete variable?
□ A. The height of individuals in a classroom.
□ B. The number of students in a classroom.
□ C. The temperature in degrees Celsius.
□ D. The time it takes for a car to travel one mile.
Correct Answer B. The number of students in a classroom.
A discrete variable represents countable, distinct values, such as the number of students in a classroom. In this case, you can't have a fraction or a non-integer value for the number of students; it's always a whole number. The other options are continuous variables because they can take on a wide range of values, including fractions or decimals.

• 14. You are recording the number of people waiting at a bus stop at different times of the day. Is this a discrete or continuous variable?
□ A. □ B.
□ C. Both discrete and continuous
□ D. Neither discrete nor continuous
Correct Answer A. Discrete
The number of people waiting at a bus stop is a discrete variable because it can only take on whole number values (e.g., 0, 1, 2, 3, etc.). You can't have a fraction or a non-integer value for the number of people waiting. Discrete variables represent distinct, separate values and are appropriate for counting, while continuous variables can take on any value within a range.

• 15. Consider the measurement of the time it takes for a computer program to execute a specific task. Is this a discrete or continuous variable?
□ A. □ B.
□ C. Both discrete and continuous
□ D. Neither discrete nor continuous
Correct Answer B. Continuous
The time it takes for a computer program to execute a task can be a continuous variable. It can take on a wide range of values, including fractions or decimals. Unlike discrete variables, which represent distinct, countable values, continuous variables can take on any value within a range.
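The rule of thumb running through the quiz (counts are discrete, measurements are continuous) can be illustrated programmatically. This is our own illustration with made-up data, not part of the quiz, and the heuristic is deliberately crude: a measurement could coincidentally be a whole number.

```python
def classify(values):
    """Crude heuristic: data consisting only of whole numbers is likely
    discrete (countable); fractional values suggest continuous (measurable)."""
    if all(float(v).is_integer() for v in values):
        return "discrete (countable)"
    return "continuous (measurable)"

print(classify([0, 1, 2, 3]))        # e.g. coconuts produced per year
print(classify([46.6, 47.1, 45.9]))  # e.g. weekly weights in kg
```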
What is the formula of the circumference of the base of a cylinder?
We have its circumference as 176 cm. We can find the radius of the cylinder using the equation C = 2πr. Solving 2πr = 176 with π ≈ 22/7 gives r = 28 cm.

What do you mean by circumference of cylinder?
The circumference of a cylinder, also called the perimeter, is the distance around that cylinder.

How do you measure circumference?
The circumference is the distance around a circle. In other words, it's the perimeter of the circle. And we find the circumference by using the formula C = 2πr.

How do you find the circumference of a base?
The base of a cone is a circle. This means we can calculate its area using the formula π multiplied by the radius squared, and its circumference by multiplying two by π by the radius.

How do you find the circumference of a cylinder for development?
To calculate the circumference of a circle, use the formula C = πd, where "C" is the circumference, "d" is the diameter, and π is approximately 3.14. If you have the radius instead of the diameter, multiply it by 2 to get the diameter.

What is a diameter of a cylinder?
The diameter, or the distance across a cylinder that passes through the center of the cylinder, is 2R (twice the radius). The lateral surface area of an open-ended cylinder (as shown) is 2πRL.

How do you find the circumference of a balloon?
Wrap a cloth tape measure around the center of the balloon to check its circumference. To get the radius of your inflated balloon, place a ruler on each side of the center of the balloon, measure the distance between them, and divide it in half.

How do you find the circumference of a cylinder when given the volume?
We know that the circumference of a circle of radius 'r' is C = 2πr. Thus, when the circumference of the base of a cylinder (C) and its height (h) are given, we first solve the equation C = 2πr for 'r' and then apply the volume of a cylinder formula, which is V = πr²h.

Is girth and circumference the same?
Girth measurement is a method to track changes in body dimensions over time. Girths are circumference measures at standard anatomical sites around the body. Girth is measured with a tape and can be used in determining body size and composition and to monitor changes in these parameters.

How do you find the diameter of a tank?
EASY WAY – Take a piece of string and wrap it around the tank. Then cut or mark the string so you can measure the total length. Divide that number by 3.14 to get your diameter in inches. The number will always be slightly high, since you are measuring the outside of the tank.

What is the formula for the circumference of a cylinder?
The circumference of the circular base of a cylinder is C = 2πr. (Note that 2πr(r + h) is the total surface area of a closed cylinder, not its circumference.)

A cylinder is one of the most basic curvilinear geometric shapes: the surface formed by the points at a fixed distance from a given straight line, the axis of the cylinder.

What is the formula for finding the density of a cylinder?
Density is mass divided by volume. Assuming you can weigh the cylinder to get its mass, you need to calculate its volume. The equation for this is pi*h*r^2, where h is the height or length of the cylinder, and r is its radius.

How do you calculate the diameter of a cylinder?
The cubic volume of a cylinder is found by multiplying the radius times the radius times pi times the height. Find the radius and height. If the diameter is known, then divide the diameter by 2 to get the radius.

How do you find the lateral and surface areas of a cylinder?
The lateral surface area of a cylinder is directly proportional to the radius and the height of the cylinder. The formula for the lateral surface area of a cylinder is A = 2 * pi * r * h. To find the lateral surface area of a cylinder, enter the radius and height values in the box and hit the Calculate button.
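The formulas in the answers above are easy to verify with a few lines of code. A small Python sketch (the function names are ours):

```python
import math

def circumference(radius):
    return 2 * math.pi * radius            # C = 2πr

def radius_from_circumference(c):
    return c / (2 * math.pi)               # r = C / (2π)

def lateral_surface_area(radius, height):
    return 2 * math.pi * radius * height   # open-ended cylinder: 2πrh

def volume(radius, height):
    return math.pi * radius ** 2 * height  # V = πr²h

# the 176 cm circumference from the first question gives r ≈ 28 cm
print(round(radius_from_circumference(176), 1))
```

Using `math.pi` instead of the 22/7 approximation gives r ≈ 28.01 cm, which rounds to the 28 cm quoted above.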
Anomaly Detection — Product of Data Refinery

"Data is the new oil." The quote is credited to Clive Humby, who is believed to have first used it in 2006. It is widely accepted today that any data "refinery" will emit insights in the form of metrics. At eBay, applications emit thousands of metrics from various domains such as Checkouts, Search, Payments, Risk, Trust, and Shipping, to name a few.

Maintaining a robust marketplace means systems must be monitored in real time. Essentially, this means tracking the behavior of metrics over time (short and long term) to generate quick and actionable feedback to systems. However, even with domain focus, the volume of metrics is so high that we need to put systems and processes in place that help call our attention to anomalous events in metrics.

In this blog post, we explore a basic introduction to the realm of predictive analytics for metrics in the context of model-based anomaly detection, and we also present a software design for detecting anomalies in metrics. More specifically, we explore time series forecasting as an effective way to achieve next-step prediction for metrics. Further, this article will focus on metrics that may be represented as a sequence of scalar observations over discrete and regular time periods, a.k.a. a univariate time series.

Typically, anomaly detection involves taking historical metric data into consideration, training a model on the data (describing the pattern as a function of historical data points, applied in the form of hyper parameters for the model), and making a prediction. The prediction is usually in the form of a band between a lower value and an upper value. On observing a new value in the time series, we check whether the actual value lies outside the predicted range and, if so, classify the new value as anomalous. Below is a pictorial depiction of this process.

A time series is comprised of three components: trend, seasonality, and noise.
A time series may be described additively as

y(t) = Trend + Seasonality + Noise

Alternatively, it may be described multiplicatively as

y(t) = Trend * Seasonality * Noise

Time series forecasting is a vast subject that is continually undergoing research, and new models and methods are being created. In our case, we start with a popular model called ARIMA (Auto Regressive Integrated Moving Average). ARIMA is also sometimes referred to as the Box-Jenkins model.

Most of our metrics exhibit seasonal behavior, and in some cases multi-seasonal behavior. As a general rule, we pick a seasonality that is closer to our prediction frequency, since that is the most influential factor in our seasonal prediction. Examples of such seasonality may be weekly, or, in the case of intra-day predictions, 24-hour seasonality, assuming that data behaves in similar ways at the same time on any two consecutive days.

Next, we have a choice to introduce an external influence to the data in the form of another time series. We refer to this external influence as an exogenous variable. Put together, the model is known as SARIMAX (Seasonal Auto-Regressive Integrated Moving Average with eXogenous variable support).

Let's have a look at the mathematical representation for ARIMA.

• AR represents a data point in terms of time-lagged versions of the point, up to p lags:

y_t = φ_1·y_(t−1) + φ_2·y_(t−2) + φ_3·y_(t−3) + … + φ_p·y_(t−p)

• I represents the order of differencing, d, applied to achieve stationarity.

• MA represents past errors, carrying forward lessons learned from up to q past points:

y_t = θ_1·ε_(t−1) + θ_2·ε_(t−2) + θ_3·ε_(t−3) + … + θ_q·ε_(t−q)

• Together:

Δ^d y_t = Σ_(i=1..p) φ_i·Δ^d y_(t−i) + Σ_(j=1..q) θ_j·ε_(t−j)

• SARIMAX is simply the product of an ARIMA term with non-seasonal hyper parameters and an ARIMA term with seasonal hyper parameters.
For simplicity, we will only provide an abstract representation of SARIMAX below:

ARIMA(p,d,q) (non-seasonal) × (P,D,Q)[S] (seasonal)

Hyper parameter selection and tuning

The hyper parameters of ARIMA are p, d, and q. The seasonal hyper parameters are denoted by P, D, and Q. S refers to the seasonal period of the time series. In addition to the above, the model also needs another parameter that describes the trend of the time series. Usually, the trend will be constant or observed to vary linearly.

Before we can fit the model to the series, we must choose the right hyper parameters. We can select the values of the hyper parameters in one of three ways:

1. Plot the ACF (Auto Correlation Function) and PACF (Partial Auto Correlation Function) and visually determine the values.
2. Use Auto-ARIMA, a version of ARIMA that automatically determines the best hyper parameters based on AIC (Akaike Information Criterion) and BIC (Bayesian Information Criterion) values.
3. Grid search the hyper parameters by evaluating model performance against a pre-determined error criterion. The error criterion can be one of RMSE (Root Mean Square Error), MAE (Mean Absolute Error), MAPE (Mean Absolute Percentage Error), MASE (Mean Absolute Scaled Error), ME (Mean Error), or MPE (Mean Percentage Error).

In our implementation, we selected option 3 for selecting hyper parameters, to avoid erroneous and subjective interpretations of the ACF and PACF. The statistical measure whose error we aimed to minimize is the mean, which is also the measure we output as part of the predicted values. These are also referred to as forecasts. (Here is a great article describing grid searching for a SARIMA model: https://machinelearningmastery.com/how-to-grid-search-sarima-model-hyperparameters-for-time-series-forecasting-in-python/.)

In addition, we also needed to decide the confidence interval at which the predicted mean is output. In our case, we chose a 95% CI.

What is an anomaly?
Once the model is deployed, it outputs the forecasted values as a band, bounded by the upper CI and lower CI. The forecasted values are compared with the actual values as they arrive in real time. A data point is said to be an anomaly if the actual value is observed to be outside the CI band.

Let's consider a real-world metric and apply what we've learned so far. The metric we are using as an example represents counts at daily intervals of time. Below is a snapshot of the last 15 days of data; however, the more historical data we have, the more accurate the prediction.

Decomposing the time series into its constituents of Trend, Seasonality, and Noise, we can see that the time series has weekly seasonality, which is useful while grid searching for hyper parameters.

Feeding the time series to the grid searching routine to estimate hyper parameters, we get the following output. The output consists of the 3 top models, given by the lowest RMSE for the mean, out of thousands of combinations in the search space. The best performing model has:

p,d,q = 1,0,1
P,D,Q,S = 2,1,0,7
Trend = n (no explicit trend defined)

("[(1, 0, 1), (2, 1, 0, 7), 'n']", 9221.353200844014)
("[(2, 0, 0), (2, 1, 0, 7), 'n']", 9280.010864427197)
("[(1, 1, 0), (2, 1, 0, 7), 'n']", 9280.970349459463)

Over a period of time, as the time series collects more historical data, we'll need to re-run the grid search to tune the hyper parameters and accommodate new data behavior.

Anomaly detection process flow

The entire process flow is described as a sequence of steps shown in the following diagram. The following line chart compares how actual values of the metric have moved over time in relation to a forecasted range defined in terms of upper CI and lower CI. We can see that for most of the duration, the actual values are mostly within the forecasted range, which means our model has been forecasting accurately.
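The grid-searching routine above scores each hyper-parameter combination by an error criterion such as RMSE. The sketch below (pure Python, illustrative ranges, not eBay's production code) shows the error criteria and how the search space can be enumerated; each combination would then be fit to the training series and the lowest-scoring ones kept:

```python
import itertools
import math

def rmse(actual, forecast):
    # Root Mean Square Error
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mae(actual, forecast):
    # Mean Absolute Error
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    # Mean Absolute Percentage Error; undefined if any actual value is zero.
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Enumerate candidate (p,d,q)(P,D,Q)[S] combinations; ranges are illustrative.
p = d = q = range(0, 3)
P = D = Q = range(0, 2)
S = [7]  # weekly seasonality for daily data
search_space = list(itertools.product(p, d, q, P, D, Q, S))

# Each combination would be fit to the training data and scored, e.g. by the
# RMSE of the predicted mean against held-out actuals.
actual = [100.0, 200.0, 300.0]
forecast = [110.0, 190.0, 330.0]
score = rmse(actual, forecast)
```

The same loop extends naturally to the other criteria (MASE, ME, MPE) listed earlier; only the scoring function changes.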
Towards the end of the timeline, we do see potential anomalies that have been investigated and classified as true or false anomalies appropriately.

Now that all the details of the model are in place, let's explore the engineering design to deploy it in production. We'll also look at how the design and deployment scale to more metrics. Below is a simple design showing the logical components at play.

1. The four key blocks represent the Data Source, Data Processing, Data Sink, and Data Visualization.
2. Data Sources may comprise a variety of databases that ideally support fast aggregation queries. For intra-day anomaly detection, we expect to have access to real-time data.
3. Data Processing involves the following functional blocks:
   □ The Scheduler issues batch triggers to drive the Query Manager. Batch triggers are configured for the frequencies at which we need to predict metric values, e.g. Daily, Hourly, Three-Hourly.
   □ The Query Manager manages the selection and execution of predefined aggregated queries specific to metrics.
   □ A good software design supports a degree of configurability. The Hyper Parameters and Alerts are externalized into a configuration file.
   □ The Model Executor reads the configuration for metrics and generates the model and the next-step forecast for the metrics.
   □ Forecast Comparison and Alerts compares the forecasts with the actual values, and if the actual value of the metric is outside the Confidence Interval band of the predictions, a notification is sent based on the alert configuration.
4. The predicted values and the actual values are useful to understand how the model has been performing; therefore, we store the data back into a Data Sink, in this case Elasticsearch.
5. Data Visualization is a key component with numerous dashboards whose graphics represent raw data to be compared with anomaly data. This allows us to understand how actual values are moving in reference to forecasted values.
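The forecast-comparison step described above reduces to a simple band check: a point is flagged when the actual value falls outside the CI band. A minimal sketch, with illustrative names and numbers rather than the production code:

```python
def is_anomaly(actual, lower_ci, upper_ci):
    """Flag a data point whose actual value falls outside the CI band."""
    return actual < lower_ci or actual > upper_ci

# Forecast band for three intervals, with the actual values that arrived.
bands = [(90.0, 110.0), (95.0, 115.0), (100.0, 120.0)]
actuals = [105.0, 130.0, 99.0]

flags = [is_anomaly(a, lo, hi) for a, (lo, hi) in zip(actuals, bands)]
# The second point exceeds its upper bound and the third falls below its
# lower bound, so both are flagged.
```

In the deployed system, a flagged point would trigger a notification according to the alert configuration.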
Physically, the components of the Data Processing block reside inside a pod in a Tess cluster. Tess.io is eBay's unified cloud infrastructure based on Kubernetes. Multiple pods are deployed on the cluster, with each pod containing Docker containers for various services. It is recommended that a pod host metrics pertaining to a specific domain, such as Risk, Payments, and so on, offering containment of failures at the pod level. For fault tolerance, we recommend deploying pods into several regions/DCs in an Active-Passive configuration.

Physical architecture

Handling anomalies

A false anomaly is essentially a false positive: even though the actual value was outside the forecasted CI range, the value is attributed to a known/valid reason. In the case of a false anomaly, we need to let the model learn from this behavior so that the particular false positive is accounted for in further forecasting. In other words, we continue to use the actual value in further forecasting.

A true anomaly is essentially unexpected, and we need to investigate further to attribute the right reasons to the anomaly. In this case, we need to prevent the model from learning this behavior, since we successfully detected an anomaly. Hence, we replace the actual value with the predicted value, thereby ensuring that a similar data point deviation will continue to be flagged as anomalous in future predictions.

Time series forecasting is a complex subject, with new research and methodologies being invented regularly. This article presents ARIMA as one of the more basic models in practice that allows for quickly generating returns. It is worth noting that ARIMA is technically a statistical model as opposed to a pure ML model. Additionally, metrics with low dimensionality and low cardinality of dimensions are best suited to the ARIMA model.
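The feedback rule described under "Handling anomalies" can be sketched as follows; the function name and triage flag are hypothetical, chosen only to illustrate the rule:

```python
def value_to_learn(actual, predicted, is_true_anomaly):
    """Choose the value the model should train on going forward.

    False anomaly -> keep the actual value, so the model learns the
                     (explained) behavior and stops flagging it.
    True anomaly  -> substitute the predicted value, so the model does not
                     learn the anomaly and similar deviations stay flagged.
    """
    return predicted if is_true_anomaly else actual

# After triage: the first point was explained (false anomaly, keep actual),
# the second was a genuine incident (true anomaly, keep predicted).
training_history = [
    value_to_learn(actual=130.0, predicted=105.0, is_true_anomaly=False),
    value_to_learn(actual=250.0, predicted=110.0, is_true_anomaly=True),
]
```

The triage decision itself (true vs. false) comes from human investigation, as described above; the code only applies its outcome to the training data.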
For metrics with high dimensionality and high cardinality of dimensions, anomaly detection scales better with the use of a model closer to pure Machine Learning deployed as a Recurrent Neural Network. 1. https://machinelearningmastery.com/how-to-grid-search-sarima-model-hyperparameters-for-time-series-forecasting-in-python/ 2. https://people.duke.edu/~rnau/411sdif.html
Gyrokinetic Simulation in the 2000s

[This overview was written during the period 2006-2009]

In the text below we describe the prehistory, history and present use of the GYRO code. A broad spectrum of published code applications is summarized, and the key role played by GYRO within the US fusion program is clarified. More historical details can be found in the Spring 2008 Issue of SciDAC Review, whereas the Technical Manual provides a detailed description of the underlying equations and discretization schemes.

The use of gyrokinetic codes at General Atomics began in 1994 with the acquisition of the linear gyrokinetic stability code GSTOTAL [KRT95]. GSTOTAL was an Eulerian initial-value code with trapped and passing particles (ions and electrons) as well as collisional and electromagnetic physics. GSTOTAL represented an enormous technical advance in simulation capability, comparable in significance to Kotschenreuther’s prior invention of the well-known \(\delta f\) method [Kot88] for particle simulation. The time-advance scheme in GSTOTAL was fully implicit and allowed use of timesteps much larger than those imposed by the electron parallel Courant limit. Historically speaking, GSTOTAL was the first step toward a practical nonlinear gyrokinetic code.

With the addition of plasma shaping and finite-\(\delta B\) effects, GSTOTAL evolved into the linear stability code GKS. GKS has been routinely used in DIII-D experimental studies for over a decade. Combined with results from nonlinear flux-tube gyrofluid simulations [WKM94, WKMH95], GKS was crucial in the development of GLF23 [WSD+97], one of the most popular transport models worldwide.

In what follows, we will draw a distinction between local and global simulations. In this context, local means flux-tube. Flux-tube simulations represent the \(\rho_* \rightarrow 0\) limit of global simulations, for which the transport scaling is purely gyroBohm [CWD04] [1].
Here, \(\rho_* = \rho_s/a\) is the ratio of gyroradius to system size, with \(\rho_s\) the ion-sound Larmor radius, and \(a\) the plasma minor radius.

Nonlinear flux-tube ‘’gyrofluid’’ simulations by Beer, Dorland, Hammett, Snyder, Waltz and others provided the key physics discoveries in the mid-1990s, years before the same phenomena were observed in gyrokinetic simulations. Gyrofluid simulations demonstrated that nonlinear, self-generated (zonal) flows control the nonlinear saturation of transport [BH96b, DH93, HBC+94, WKM94, WKMH95], and that equilibrium \(\mathbf{E}\times\mathbf{B}\) shear can quench transport if the shearing rates are comparable to maximum linear growth rates [WKM94, WKMH95]. Nonlinear gyrofluid codes were the first to treat trapped electron [BH96a] and electromagnetic [SH01] turbulence.

The first major contribution from nonlinear gyrokinetic codes was the verification of the importance of Rosenbluth-Hinton (RH) residual flows [RH98], as demonstrated by the flux-tube particle-in-cell (PIC) code PG3EQ [DWBC96]. RH flows were shown to give rise to an upshift in the nonlinear threshold for ITG-ae [2] turbulence. The nonlinear upshift was later named the Dimits shift after its discoverer. The difficulty in properly treating this residual in gyrofluid models was one motivating factor for a switch from gyrofluid to gyrokinetic simulation. To this end, by building upon the GSTOTAL implicit scheme, Dorland and co-workers created the nonlinear Eulerian gyrokinetic code GS2 [DJKR00]. GS2 was the first nonlinear gyrokinetic code to include the crucial nonadiabatic electron dynamics required for trapped electron mode and electromagnetic physics.

Design History

Development of GYRO started in 1999. The primary goal was to generalize GS2 by retaining profile-variation effects to allow, in principle, deviations from pure gyroBohm scaling. The numerical methods for GYRO were initially patterned after GS2 wherever possible.
In the end, many significant departures from GS2 were required to meet the GYRO design target and to simultaneously increase computational efficiency. By 2001, GYRO had the ability to operate either globally, using Dirichlet (zero-value) radial boundary conditions, or locally, using flux-tube (periodic) boundary conditions. An implicit-explicit Runge-Kutta (IMEX-RK) integrator was eventually added to overcome the electrostatic-Alfvén wave Courant limit, which can severely limit the timestep for large-domain simulations. Independently, a novel poloidal discretization scheme solved the Ampere cancellation problem [CW03b]. The latter pathology hampered electromagnetic PIC simulation for over a decade. To date, only a single PIC code [CPC+03] has successfully treated finite-\(\beta\) fluctuations with full electron dynamics, and only after implementing an analog of the GYRO scheme. By 2002, GYRO [CW03b] achieved robust operation meeting its physics design targets. This was demonstrated [CW03a] in realistic simulations of dimensionally similar Bohm-scaled DIII-D L-mode discharges.

Yearly Publication Synopsis

The first application of GYRO was to use the global capability with adiabatic electrons to systematically examine the breaking of gyroBohm scaling (including what are now called, somewhat ambiguously, nonlocal or turbulence-spreading effects) via profile variation [WCR02]. GYRO numerical algorithms were documented [CW03b], although publication was significantly delayed after the first submission was lost in transit. A significant amount of linear and nonlinear benchmark data related to the Cyclone base case was given, as were parameter scans for nonlinear electromagnetic variants of the Cyclone case. On the experimental side, simulations of DIII-D L-mode discharges were shown to match experimental power flow data within error bars [CW03a] on the ion temperature gradient.
These simulations were physically realistic, and included finite-\(\beta\) effects, collisional electron physics at real mass ratio, equilibrium \(\mathbf{E}\times\mathbf{B}\) and profile shearing, as well as plasma shaping.

Turbulent dynamos in the tokamak current-voltage relation [HWC04] were studied, showing that the turbulent dynamo EMF drives large current density corrugations at low-order rational surfaces, but little net current. In other work, we attempted to correct a misunderstanding generated by a highly-publicized global \(\rho_*\)-scan [LEHT02] with highly artificial profiles. The single scan appeared to suggest a universal range in \(\rho_*\) marking the transition from Bohm to gyroBohm scaling. GYRO work established that the transition cannot be characterized by a universal curve; rather, the transition is highly dependent on the profile shape [CWD04] [3].

During this period, there were persistent claims from certain groups that transport is depressed near a \(q_\mathrm{min}\)-surface, where there is a gap in singular surfaces [KKH+00]. Global GYRO simulations indicated that transport flow tends to vary monotonically across \(q_\mathrm{min}\) surfaces [CWR04] (as expected from linear theory and flux-tube gyrofluid simulations [WKM94, WKMH95]) due to the appearance of nonresonant modes. These modes are absent in some simplified gyrofluid simulations, which at first appeared to confirm the barrier hypothesis.

The first systematic gyrokinetic study of particle transport and impurity dynamics was made with GYRO as thesis work for a UCSD graduate student. In particular, temperature-gradient-induced particle pinches, thermal and energetic helium ash transport, differential flows in D-T plasmas, and collisional effects on particle pinches were examined [EMCW05]. Scans in temperature and density gradients (moving from ITG- to TEM-dominated transport), \(T_i/T_e\), \(\mathbf{E}\times\mathbf{B}\) and parallel velocity shear [KWC05] were also published.
A detailed study of the beta dependence of electron and ion transport was made [Can05]. This latter work documents the so-called beta runaway phenomenon, which occurs at about half of the MHD critical beta; as of Winter 2009, it remains an unsolved problem in gyrokinetics. Using a profile feedback scheme, simulations starting with DIII-D L-mode profiles successfully (and slightly) relaxed the experimental temperature and density so that simulated power flows matched experimental ones [WCH+05]. This sort of capability was an early landmark in the development of a more comprehensive steady-state gyrokinetic transport code. GYRO simulations also yielded several examples of nonlocal transport [Wal05, WC05]; in particular, turbulence draining from unstable to less unstable (or stable) regions. At this time, the detailed radial structure of nonlinear profile perturbations was also explored: persistent (i.e., time-averaged) structure tied to rational surfaces [Can05, WCH+05] was found when electrons are kinetic. These corrugation structures are electrostatic in nature, and most pronounced for lower-order surfaces

\[q = \frac{1}{1}, \frac{2}{1}, \frac{3}{1}, \ldots \; ,\]

and are weaker for successively higher-order surfaces, like

\[q = \frac{1}{2}, \frac{3}{2}, \frac{5}{2}, \ldots \; .\]

The width of these structures is on the order of a few ion gyroradii.

GYRO results showed that ITG/TEM turbulence could induce the transport of energetic fusion alpha particles [EMCW06b]. Systematic safety factor, magnetic shear, and MHD alpha parameter scans [KWC06] were carried out. Theory and simulations of gyrokinetic turbulent heating [HW06] were published. GYRO simulations which perfectly project profiles from dimensionally similar DIII-D discharges verified that the L-modes did indeed have Bohm scaling, and that the experimentally-inferred gyroBohm scaling in some H-modes was actually due to experimental profile dissimilarity [WCP06].
The predicted profile corrugations in the electron temperature gradient were observed in \(q_\mathrm{min} = 2\) DIII-D discharges, and the attending \(\mathbf{E}\times\mathbf{B}\) shear layer is believed to be the trigger for low-power reversed-shear ITB formation [WABC06]. Density peaking via a particle pinch was demonstrated for a DIII-D L-mode plasma [EMCW06a]. In studies relating to the foundation of gyrokinetic theory, the connection between velocity-space resolution, entropy production and conservation, and numerical dissipation was rigorously demonstrated [CW06], and the parallel nonlinearity was shown to be asymptotically subdominant (as required by the gyrokinetic ordering) and to have a negligible effect on energy transport for experimentally-relevant discharges [CWPC06].

The capability to simultaneously treat electron and ion gyroaverages, and thus to perform fully-coupled, multi-scale ITG-ETG simulations, was added to GYRO [CWFH07a]. The results of these simulations allowed us to make significant progress on the problem of electron-scale transport using coupled ITG/TEM-ETG turbulence at a reduced ion-to-electron mass ratio.

The first systematic studies of gyrokinetic momentum transport [WSCH07], including the effects of the angular momentum pinch from \(\mathbf{E}\times\mathbf{B}\) shear as well as the Coriolis pinch, were published, and the effect of plasma shape on \(\mathbf{E}\times\mathbf{B}\) shear quenching and transport was studied [KWC07]. The next step in the validation of GYRO was also begun, with the development and application of synthetic BES and CECE diagnostics to allow direct comparisons of GYRO fluctuation predictions against DIII-D core turbulence measurements. The initial results have been presented at a number of conferences and published in multiple journals [HCW+08, WSM+08]. The initial study focused upon modeling a steady, MHD-free L-mode DIII-D discharge, with the primary conclusions that: 1.
Using local, fixed-gradient simulations, GYRO could match the experimental energy flows (as calculated via a ONETWO power balance analysis) within experimental gradient uncertainties for \(r/a < 0.6\) (where \(r/a\) denotes normalized toroidal flux), but systematically underpredicted the energy flow at larger radii.

2. When the effects of the synthetic diagnostics were included, local GYRO simulations also accurately reproduced measured fluctuation spectra and correlation lengths at \(r/a = 0.5\), but underpredicted the measured fluctuation amplitudes at \(r/a = 0.75\) by an amount consistent with the underprediction of the energy fluxes. Interestingly, the shapes of the spectra and the correlation functions were still accurately reproduced at \(r/a = 0.75\).

What the source of the energy flow underprediction is, why the correlation functions and spectral shapes, but not the amplitudes, are accurately predicted at \(r/a = 0.75\), and whether these results hold for other experimental conditions remain open questions under active investigation at this time.

More results related to gyrokinetic turbulent heating [WS08] were published, and a study of GAMs in the context of the drift-wave/zonal-flow paradigm was carried out [WH08], showing that it applies equally well to long-wavelength ITG/TEM and short-wavelength ETG turbulence. Significant progress on neoclassical and steady-state transport physics was also made; these lines of research have a separate wiki. Researchers from MIT were able to demonstrate good agreement between GYRO and C-Mod experiments via synthetic PCI diagnostics [LPE+09].

A Computational Science and User Perspective

GYRO owes its computational efficiency in part to the strong support from the ORNL Center for Computational Science (CCS). GYRO runs well on a wide range of systems, from small clusters to large supercomputers. One can move between platforms seamlessly by setting a single environment variable.
GYRO was among the earliest applications ported to the Cray X1 and XT3 at ORNL. The code is modular and the layout is carefully organized. There are few uses of esoteric language features. Initial X1 optimizations to take advantage of multistreaming and vectorization were quite successful for all but the collision operator. A later effort to improve the performance of the collision operator yielded a factor-of-ten improvement on the X1, with an average 10% improvement on IBM and commodity systems. Recent PERC data is available which analyzes GYRO performance on various HPC systems [WCC+05] using the IPM, KOJAK, SvPablo, TAU and PMaC modeling tool suite. Additional GYRO performance data on various systems (including the Cray X1, XD1 and XT3) has been presented by Vetter [VATHD+05], Worley [WATHD+05] and Fahey [FC04]. GYRO is presently so reliable that it is routinely used by ORNL staff to diagnose system hardware and software issues; for example, chassis interconnect problems on the XD1, filesystem slowdown on the XT3, and memory management issues on the SGI Altix.

From the point of view of utility, the Eulerian codes GS2 and GYRO are set apart from all other codes in the US program in that they have a large (and growing) non-developer user group. Numerous painstaking simulations of DIII-D, JET, JT60, and NSTX discharges have been made. R. Bravenec and C. Holland have developed synthetic diagnostic tools to analyze GYRO data. In addition to many Cyclone-based scans, we maintain a transport database containing over 300 well-resolved flux-tube simulations based on the GA standard case parameters [WSD+97]. Nearly all of these give particle and energy transport coefficients for both electrons and ions. Some also include momentum transport. Additional scans are continuously being added to the database. In our view, compiling a database of simulations is a key practical end-product of nonlinear gyrokinetic simulations.
This database provides the benchmarks and validation for the GA advanced gyrofluid transport model TGLF [SKW07, SKW05].

Urban Legends

The Eulerian codes GS2 and GYRO have had to confront a number of urban legends, mostly in the form of unpublished/unsubstantiated claims circulating within the fusion theory community. These seem to originate from researchers having no first-hand experience with either Eulerian schemes or local simulations. We number these for future reference.

UL1: The local gyroBohm limit of global codes differs from local codes

This cannot be true. As \(\rho_*\) vanishes, the transport obtained from a global code reaches a limiting value at a given radial location. This limiting value (i.e., the gyroBohm-scaled limit) is identical to the local simulation result. This not only provides the physical meaning of a local simulation, but is an important test of validity for local and global codes alike. GYRO has passed this test repeatedly [CWD04, WCR02].

UL2: Full torus simulations are necessary to correctly compute the local transport

DIII-D full-physics simulations which span 1/6th of a torus, 1/3rd of a torus, 1/2 of a torus, and a full torus give transport diffusivities which differ by only a few percent [WCH+05]. In fact, full-torus simulations are generally wasteful of computer resources. Global codes which are limited to full-torus operation could obtain significantly more accurate results by simulating only a fraction of a torus but operating with a higher number of particles per cell and/or spatial resolution.

UL3: Eulerian codes have inadequate velocity-space resolution

The truth is in fact quite the opposite. Published GYRO simulations are always checked for adequate grid convergence by the standard method of grid refinement. GYRO has a particularly efficient velocity-space discretization scheme which suffers no accuracy loss even when the distribution is nearly discontinuous across the trapped-passing boundary.
We typically use 128 velocity gridpoints per real-space cell. This is roughly equivalent to 128 particles per cell (PPC) in terms of points where the distribution function is known. We emphasize that this was, until recently, significantly more than that typically used in PIC simulations [4]. We have verified that no significant fine-scale structure in the distribution is being ignored or ‘’coarse-grained’’. Recent GYRO work [CW06] demonstrates a detailed steady-state balance between the production of fluctuations and (numerical) dissipation, thus resolving the entropy paradox in a manner consistent with the picture developed by Krommes [Kro99, KH94]. The numerical dissipation is also shown to be so small that it does not affect the observed transport.

UL4: The parallel nonlinearity can have a dramatic effect on the transport

This is false for realistic core tokamak parameters. The so-called parallel nonlinearity (a velocity-space nonlinearity which is formally one order smaller in \(\rho_*\) than other terms in the gyrokinetic equations) is only one of several small terms commonly neglected in the standard operation of gyrokinetic codes. GYRO has shown [CWPC06] that the parallel nonlinearity has no statistically significant effect on the diagnosed transport when \(\rho_* < 0.01\). Moreover, the parallel nonlinearity has nothing whatsoever to do with the entropy paradox or with producing steady states of turbulence. To be clear, the parallel nonlinearity (related to nonlinear Landau damping and to wave-particle trapping) is the physical origin of a small turbulent heating source. GYRO is the first code to diagnostically calculate this heating [HW06].
On This Day in Math - May 8

A small circle is quite as infinite as a large circle. ~G. K. Chesterton

The 128th day of the year; 128 is the largest known even number that can be expressed as the sum of two primes in exactly three ways. (Find them) *Prime Curios How many smaller numbers (and which) are there that can be so expressed?

But it cannot be expressed as the sum of distinct squares, for any number of squares. And it is the largest such number, ever.... no, I mean EVER. The very last. (Surprisingly, there are only 31 numbers that cannot be expressed as the sum of distinct squares.)

128 can be expressed by a combination of its digits with mathematical operators thus: 128 = 2^(8-1), making it a Friedman number in base 10. (Friedman numbers are named after Erich Friedman, as of 2013 an Associate Professor of Mathematics and ex-chairman of the Mathematics and Computer Science Department at Stetson University, located in DeLand, Florida.)

128 is the sum of the factorials of the first three prime numbers: 2! + 3! + 5! = 128.

128 = 2^7, so in binary it is a 1 followed by 7 zeros. Since 128 = 2·4^3, in base 4 it's a 2 followed by three zeros; and since 128 = 2·8^2, in base 8 it's a 2 followed by two zeros.

128 is a power of two, and all of its digits are powers of two. I don't know of any other.

Some nice relationships between 128 and its digits: 128 + (1+2+8) = 139, a prime number. But 128 + (8+1) is 137, also prime, and 128 + (2+1) is 131, a prime, AND 128 + (8+2) is not prime, but 138 is between a twin prime pair.
And 1*2*8 = 16 is a divisor of 128.

And that pair of cousin primes, 127 and 131, are the largest such pair with a power of two (128) between them.

The name for a particular 7th-dimensional hyperplex with 128 vertices is a hepteract. Dazzle your friends. Oh, I told you 128 is the 7th power of two.... but there are no more three-digit numbers that are 7th powers...

And if you like to keep score, 128 is 6 score and 8. In old commercial terminology, a shock was a lot of 60 items, so 128 is also two shock and 8, or 28 in sexagesimal (base sixty). A shock is also the number of stalks of corn or wheat (supposedly) gathered and stood on end in the fields to dry, like in "When the frost is on the pumpkin and the fodder's in the shock."

128 is divisible by four, so it is the difference of two squares of numbers that differ by 2, and since 128/4 = 32, the numbers must straddle 32: 33² − 31² = 128. But it is also divisible by eight, so it is the difference of two squares of numbers that differ by four (there is a power-of-two relation working here, which students might find), and since 128/8 = 16, 18² − 14² = 128.

And in 1984 the Macintosh 128K was the hottest desktop computer around.

1654 Otto von Guericke demonstrates the Magdeburg hemispheres in front of the Imperial Diet and the Emperor Ferdinand III in Regensburg. The Magdeburg hemispheres, around 50 cm (20 inches) in diameter, were designed to demonstrate the vacuum pump that Guericke had invented. One of them had a tube connection to attach the pump, with a valve to close it off. When the air was sucked out from inside the hemispheres, and the valve was closed, the hose from the pump could be detached, and they were held firmly together by the air pressure of the surrounding atmosphere. Thirty horses, in two teams of fifteen, could not separate the hemispheres until the valve was opened to equalize the air pressure.
In 1656 he repeated the demonstration with sixteen horses (two teams of eight) in his hometown of Magdeburg, where he was mayor. He also took the two spheres, hung the two hemispheres with a support, and removed the air from within. He then strapped weights to the spheres, but the spheres would not budge. Gaspar Schott was the first to describe the experiment in print in his Mechanica Hydraulico-Pneumatica (1657). In 1663 (or, according to some sources, in 1661) the same demonstration was given in Berlin before Frederick William, Elector of Brandenburg, with twenty-four horses. It is unclear how strong a vacuum Guericke's pump was able to achieve, but if it was able to evacuate all of the air from the inside, the hemispheres would have been held together with a force of around 20000 N (4400 lbf, or 2.2 short tons), equivalent to lifting a car or small elephant; a dramatic demonstration of the pressure of the atmosphere. *Wik

1661 “On 8 May 1661 the Society’s Journal Book notes that ‘a motion was made for the erecting of a library’, and later in the same month ‘it was resolved that every member, who hath published or shall publish any work, give the Society one copy’.” (from Emma Davidson at RSI) The Library and Archives of the Royal Society are open to researchers and members of the public. Access is free of charge.

1698 Henry Baker was born on May 8. His book The Microscope Made Easy (1743) has been described as the first laboratory manual for microscopy. *RMAT

1774 The conjunction of the planets Jupiter, Mars, Venus, Mercury and the Moon on this date would herald the apocalypse according to a treatise by Eelco Alta, a Frisian clergyman and theologian. However, the apocalypse did not occur, perhaps because the projected conjunction of the heavenly bodies never occurred.
One good result attributed to the treatise was the creation of what is now the oldest continuously operating planetarium in the world, the Eise Eisinga Planetarium, built into the ceiling of Eisinga's former home in the Netherlands. It is driven by a pendulum clock, which has 9 weights or ponds. The planets move around the model in real time, automatically. (A slight "re-setting" must be done by hand every four years to compensate for the February 29th of a leap year.) In addition to the basic orrery, there are displays of the phase of the moon and other astronomical phenomena. The planetarium includes a display for the current time and date. The plank that has the year numbers written on it has to be replaced every 22 years. To create the gears for the model, 10,000 handmade nails were used. *Wik *collected notes

*HT Erik K, who sent: "This UNESCO World Heritage Site (since September 2023) is a museum now and open to the public. There you can actually sit inside that bedroom but also have insight-giving views above the ceiling. This unique craftsmanship is located in the city of Franeker in the province of Friesland."

1790 The French Assembly ordered the Académie des Sciences to standardize weights and measures on 8 May 1790. The Académie appointed a Commission of Lagrange, Borda, Condorcet, Laplace and Tillet to compare the decimal and duodecimal systems. Another Commission, with Monge instead of Tillet, was to examine how to make a standard of length. The Commissions continued functioning through the Revolution.

The traditional French units of measurement prior to metrication were established under Charlemagne during the Carolingian Renaissance. Based on contemporary Byzantine and ancient Roman measures, the system established some consistency across his empire but, after his death, the empire fragmented and subsequent rulers and various localities introduced their own variants.
Some of Charlemagne's units, such as the king's foot (French: pied du Roi), remained virtually unchanged for about a thousand years, while others important to commerce—such as the French ell (aune) used for cloth and the French pound (livre) used for amounts—varied dramatically from locality to locality. By the 18th century the number of units of measure had grown to the extent that it was almost impossible to keep track of them, and one of the major legacies of the French Revolution was the dramatic rationalization of measures as the new metric system. The change was extremely unpopular, however, and a metricized version of the traditional units—the mesures usuelles—had to be brought back into use for several decades.

Woodcut dated 1800 illustrating the new decimal units which became the legal norm across all France on 4 November 1800

1794 Lavoisier guillotined, along with twenty-seven other members of the Ferme Générale, including his father-in-law. See Deaths below.

In September 1793 a law was passed ordering the arrest of all foreigners born in enemy countries and the confiscation of all their property. Lavoisier intervened on behalf of Lagrange, who certainly fell under the terms of the law, and he was granted an exception. On 8 May 1794, after a trial that lasted less than a day, a revolutionary tribunal condemned Lavoisier, who had saved Lagrange from arrest, and 27 others to death. Lagrange said on the death of Lavoisier, who was guillotined on the afternoon of the day of his trial: "It took only a moment to cause this head to fall and a hundred years will not suffice to produce its like."

1795 French astronomer Jérôme Lalande observes a "star". It is in fact the planet Neptune, which is not officially discovered until 1846. *Liz Suckow@LizMSuckow A second recording, noting a possible error, was entered on the 10th. Discovery of these recordings in 1947 by Sears C. Walker of the U.S. Naval Observatory led to a better calculation of the planet's orbit.
*Wik

In 1886, Coca-Cola, the soft drink, was first sold to the public at the soda fountain in Jacob's Pharmacy in Atlanta, Georgia. It was invented by pharmacist John Stith Pemberton, who mixed it in a 30-gal. brass kettle hung over a backyard fire. Until 1905, the drink, marketed as a "brain and nerve tonic," contained extracts of cocaine as well as the caffeine-rich kola nut. The name, using two C's from its ingredients, was suggested by his bookkeeper Frank Robinson, whose excellent penmanship provided the first scripted "Coca-Cola" letters as the famous logo. Asa Candler marketed Coke to the world after buying the company from Pemberton. *TIS

1910 The New York Times Sunday Magazine publishes the banner headline, "Fears Of The Comet Are Foolish And Ungrounded," only ten days before the Earth moves into the tail of Halley's Comet. The article featured the famous female astronomer Mary Proctor debunking horror stories such as: Here is a gigantic monster in the sky with a head over two hundred thousand miles in width… and a train two million miles in length, rushing through space at the alarming rate of a thousand miles a minute. On May 18 the earth will be plunged in this white hot mass of glowing gas, and, according to the report of the ignorant and superstitious, the world will be set on fire. These sensation makers further say that the oceans on the side facing the comet will be boiled by the intense heat, and the land scorched and blistered as the dread wanderer passes by on its baneful way.

Halley’s comet approached Earth and killed England’s King Edward VII, according to some superstitious folk. No one could definitively say how it did, but it certainly did. And that wasn’t its only offense. The Brits also figured it was an omen of a coming invasion by the Germans, while the French reckoned it was responsible for flooding the Seine.
A French scientist named Camille Flammarion, in typical French despair, reckoned that as we passed through the comet’s tail, “cyanogen gas would impregnate the atmosphere and possibly snuff out all life on the planet.” *Wired

1932 The USS Akron, an American dirigible and the world's first purpose-built flying aircraft carrier, flew mail from Lakehurst, New Jersey, to San Diego on this day in 1932. The ship reached Camp Kearny in San Diego on the morning of 11 May and attempted to moor. Since neither trained ground handlers nor specialized mooring equipment were present, the landing at Camp Kearny was fraught with danger. By the time the mooring was attempted, the helium gas had been warmed by sunlight, increasing lift. Lightened by 40 short tons (36 t), the amount of fuel spent during the transcontinental trip, the Akron was now all but uncontrollable. The mooring cable was cut to avert a catastrophic nose-stand by the errant airship, which floated upward. Most of the mooring crew—predominantly "boot" seamen from the Naval Training Station San Diego—released their lines, although four did not. One let go at about 15 ft (4.6 m) and suffered a broken arm, while the three others were carried further aloft. Of these, Aviation Carpenter's Mate 3rd Class Robert H. Edsall and Apprentice Seaman Nigel M. Henton soon plunged to their deaths, while Apprentice Seaman C. M. "Bud" Cowart held on to his line until being hoisted on board the airship an hour later. The Akron moored at Camp Kearny later that day before proceeding to Sunnyvale, California. The deadly accident was recorded on newsreel film. *Postal Museum @PostalMuseum *Wik

1961 President J. F. Kennedy presented astronaut Alan B.
Shepard the first National Aeronautics and Space Administration Distinguished Service Medal for making America’s first space flight on May 5.

Shepard and capsule recovered after splashdown

BIRTHS

1859 Johan Ludwig William Valdemar Jensen (8 May 1859 – 5 March 1925) contributed to the Riemann Hypothesis, proving a theorem which he sent to Mittag-Leffler, who published it in 1899. The theorem is important, but does not lead to a solution of the Riemann Hypothesis as Jensen had hoped. It expresses ... the mean value of the logarithm of the absolute value of a holomorphic function on a circle by means of the distances of the zeros from the centre and the value at the centre. He also studied infinite series, the gamma function and inequalities for convex functions. *SAU

1905 Karol Borsuk (May 8, 1905, Warsaw – January 24, 1982, Warsaw), Polish mathematician. His main interest was topology. Borsuk introduced the theory of absolute retracts (ARs) and absolute neighborhood retracts (ANRs), and the cohomotopy groups, later called Borsuk–Spanier cohomotopy groups. He also founded so-called shape theory. He constructed various beautiful examples of topological spaces, e.g. an acyclic, 3-dimensional continuum which admits a fixed-point-free homeomorphism onto itself; also 2-dimensional, contractible polyhedra which have no free edge. His topological and geometric conjectures and themes stimulated research for more than half a century. *Wik

1905 Winifred Lydia Caunden Sargent (8 May 1905 – October 1979) was an English mathematician. She studied at Newnham College, Cambridge and carried out research into Lebesgue integration, fractional integration and differentiation, and the properties of BK-spaces. Sargent's first publication was in 1929, On Young's criteria for the convergence of Fourier series and their conjugates, published in the Mathematical Proceedings of the Cambridge Philosophical Society.
In 1931 she was appointed an Assistant Lecturer at Westfield College and became a member of the London Mathematical Society in January 1932. In 1936 she moved to Royal Holloway, University of London; both were at the time women's colleges. In 1939 she became a doctoral student of Lancelot Bosanquet, but World War II broke out, preventing his formal supervision from continuing. In 1941 Sargent was promoted to lecturer at Royal Holloway, moving to Bedford College in 1948. She served on the Mathematical Association teaching committee from 1950 to 1954. In 1954 she was awarded the degree of Sc.D. (Doctor of Science) by Cambridge and was given the title of Reader. While at the University of London she supervised Alan J. White in 1959. Bosanquet started a weekly seminar in mathematics in 1947, which Sargent attended without absence for twenty years until her retirement in 1967. She rarely presented at it, and did not attend mathematical conferences, despite being a compelling speaker. Much of Sargent's mathematical research involved studying types of integral, building on work done on Lebesgue integration and the Riemann integral. She produced results relating to the Perron and Denjoy integrals and Cesàro summation. Her final three papers consider BK-spaces, or Banach coordinate spaces, proving a number of interesting results. *Wik

1923 Dionisio Gallarati (May 8, 1923 – May 13, 2019) was an Italian mathematician who specialised in algebraic geometry. He was a major influence on the development of algebra and geometry at the University of Genova. Gallarati published 64 papers between 1951 and 1996. Important amongst his research was the study of surfaces in P³ with multiple isolated singularities. His lower bounds for the maximal number of nodes of surfaces of degree n stood for a long time, and exact solutions for large n were still unknown in 2001.
In Grassmannian geometry he extended Segre's bound "for the number of linearly independent complexes containing the curve in the Grassmannian corresponding to the tangent lines of a nondegenerate projective curve." He extended the results to the tangent spaces of varieties of arbitrary dimension, to higher degree complexes, and to arbitrary curves in Grassmannians corresponding to degenerate scrolls. *Wik

DEATHS

1794 Antoine Laurent Lavoisier (26 August 1743 – 8 May 1794) After a trial that lasted less than a day, a revolutionary tribunal condemned Antoine Laurent Lavoisier to death. He was 51 and was guillotined the same afternoon. "It took only a moment to cause this head to fall and a hundred years will not suffice to produce its like." — Joseph Louis Lagrange, on the day of Lavoisier’s death.

Lavoisier was guillotined in the terror following the French Revolution. In 1778, he found that air consists of a mixture of two gases, which he called oxygen and nitrogen. By studying the role of oxygen in combustion, he replaced the phlogiston theory. Lavoisier also discovered the law of conservation of mass and devised the modern method of naming compounds, which replaced the older nonsystematic method. Under the Reign of Terror, despite his eminence and his services to science and France, he came under attack as a former member of the Ferme Générale. In November 1793, all former members of the Ferme Générale, including Lavoisier and his father-in-law, were arrested and imprisoned. After a trial that lasted less than a day, they were all found guilty of conspiracy against the people of France and condemned. When Lavoisier requested time to complete some scientific work, the presiding judge was said to have answered, "The Republic has no need of scientists." He was guillotined and thrown in a common grave in the Cimetière de Picpus. Mathematician Joseph Louis Lagrange lamented the execution: "It took them only an instant to cut off that head, but France may not produce another like it in a century."
About eighteen months following his death, Lavoisier was exonerated by the French government. When his belongings were delivered to his widow, a brief note was included reading "To the widow of Lavoisier, who was falsely convicted." For more about Lavoisier see the SomeBeans blog.

1779 John Farrar (July 1, 1779 – May 8, 1853) born at Lincoln, Massachusetts. As Hollis professor of mathematics and natural philosophy at Harvard, he was responsible for a sweeping modernization of the science and mathematics curriculum, including the change from Newton’s to Leibniz’s notation for the calculus. *VFR He was an American scholar. He first coined the concept of hurricanes as “a moving vortex and not the rushing forward of a great body of the atmosphere”, after the Great September Gale of 1815. Farrar remained Professor of Mathematics and Natural Philosophy at Harvard University between 1807 and 1836. During this time, he introduced modern mathematics into the curriculum. He was also a regular contributor to the scientific journals. After attending Phillips Academy, Andover, he graduated from Harvard in 1803. In 1805 he was appointed Greek tutor at Harvard. Farrar was chosen Hollis Professor of Mathematics and Natural Philosophy in 1807. He retained the chair till 1836, when he resigned in consequence of a painful illness that finally caused his death. His second wife, Eliza Ware Farrar (née Rotch), was Flemish. She married him in 1828. She authored several children's books. Farrar maintained weather records between 1807 and 1817 at Cambridge, Massachusetts. For the 23 September 1815 hurricane, he particularly noted the shape as "a moving vortex". He also observed the veering of the wind, and its different times of subsequent impacts on the cities of Boston and New York. This was the first hurricane to hit New England in 180 years, although the word had not yet been coined.
In the aftermath of the Great Gale, the concept of a hurricane as a "moving vortex" was presented by John Farrar, Hollis Professor of Mathematics and Natural Philosophy at Harvard University. In an 1819 paper he concluded that the storm "appears to have been a moving vortex and not the rushing forward of a great body of the atmosphere". The word "hurricane" comes from Spanish huracán, from the Taino hurakán, “god of the storm.” While the Taino have been essentially wiped out by disease brought by the Spanish, there are still several words from the language remaining in English. Two of my favorites: barbecue and hammock. *Assorted sources (Merriam-Webster gives the first use of hurricane in 1555, the same year as another Taino word, yuca, was first used in English.)

Farrar was elected a Fellow of the American Academy of Arts and Sciences in 1808 and a member of the American Antiquarian Society in 1814.
Mathematical Treasure: Farrar’s Translation of Lacroix’s Algebra, MAA

1904 Eadweard Muybridge, English photographer important for his pioneering work in photographic studies of motion and in motion-picture projection. For his work on human and animal motion, he invented a superfast shutter. Leland Stanford, former governor of California, hired Muybridge to settle a hotly debated issue: Is there a moment in a horse’s gait when all four hooves are off the ground at once? In 1872, Muybridge took up the challenge. In 1878, he succeeded in taking a sequence of photographs with 12 cameras that captured the moment when the animal’s hooves were tucked under its belly. Publication of these photographs made Muybridge an international celebrity. Another noteworthy event in his life was that he was tried (but acquitted) for the murder of his wife's lover. *TIS

1951 Gilbert Ames Bliss (9 May 1876, Chicago – 8 May 1951, Harvey, Illinois) was an American mathematician, known for his work on the calculus of variations. Bliss once headed a government commission that devised rules for apportioning seats in the U.S. House of Representatives among the several states. After obtaining the B.Sc. in 1897, he began graduate studies at Chicago in mathematical astronomy (his first publication was in that field), switching in 1898 to mathematics. He discovered his life's work, the calculus of variations, via the lecture notes of Weierstrass's 1879 course and Bolza's teaching. Bolza went on to supervise Bliss's Ph.D. thesis, The Geodesic Lines on the Anchor Ring, completed in 1900 and published in the Annals of Mathematics in 1902. Bliss was elected to the National Academy of Sciences (United States) in 1916. He was the American Mathematical Society's Colloquium Lecturer (1909), Vice President (1911), and President (1921–22).
He received the Mathematical Association of America's first Chauvenet Prize, in 1925, for his article "Algebraic functions and their divisors," which culminated in his 1933 book Algebraic Functions. He was also an elected member of the American Philosophical Society and the American Academy of Arts and Sciences. *Wikipedia

1959 Renato Caccioppoli (20 January 1904 – 8 May 1959) His most important works, out of a total of around eighty publications, relate to functional analysis and the calculus of variations. Beginning in 1930 he dedicated himself to the study of differential equations, becoming the first to use a topological-functional approach. Proceeding in this way, in 1931 he extended the Brouwer fixed point theorem, applying the results obtained both to ordinary differential equations and to partial differential equations. In 1932 he introduced the general concept of inversion of functional correspondence, showing that a transformation between two Banach spaces is invertible only if it is locally invertible and if the only convergent sequences are the compact ones. Between 1933 and 1938 he applied his results to elliptic equations, establishing the majorizing limits for their solutions, generalizing the two-dimensional case of Felix Bernstein. At the same time he studied analytic functions of several complex variables, i.e. analytic functions whose domain belongs to the vector space Cⁿ, proving in 1933 the fundamental theorem on normal families of such functions: if a family is normal with respect to every complex variable, it is also normal with respect to the set of the variables. He also proved a logarithmic residue formula for functions of two complex variables in 1949. In 1935 Caccioppoli proved the analyticity of class C² solutions of elliptic equations with analytic coefficients.
The year 1952 saw the publication of his masterwork on the area of a surface and measure theory, the article Measure and integration of dimensionally oriented sets (Misura e integrazione degli insiemi dimensionalmente orientati, Rendiconti dell'Accademia Nazionale dei Lincei, s. VIII, v. 12). The article is mainly concerned with the theory of dimensionally oriented sets; that is, an interpretation of surfaces as oriented boundaries of sets in space. Also in this paper, the family of sets approximable by polygonal domains of finite perimeter, known today as Caccioppoli sets or sets of finite perimeter, was introduced and studied. His last works, produced between 1952 and 1953, deal with a class of pseudoanalytic functions, introduced by him to extend certain properties of analytic functions. In his last years, the disappointments of politics and his wife's desertion, together perhaps with the weakening of his mathematical vein, pushed him into alcoholism. His growing instability had sharpened his "strangenesses", to the point that the news of his suicide on May 8, 1959, by a gunshot to the head did not surprise those who knew him. He died at his home in Palazzo Cellammare. In 1992 his tormented personality was remembered in a film directed by Mario Martone, The Death of a Neapolitan Mathematician (Morte di un matematico napoletano), in which he was portrayed by Carlo Cecchi. *Wik

1953 Benjamin Fedorovich Kagan (10 March 1869 in Shavli, Kovno (now Kaunas, Lithuania) – 8 May 1953 in Moscow, USSR) Kagan worked on the foundations of geometry, and his first work was on Lobachevsky's geometry. In 1902 he proposed axioms and definitions very different from Hilbert's. Kagan studied tensor differential geometry after going to Moscow because of an interest in relativity. Kagan wrote a history of non-Euclidean geometry and also a detailed biography of Lobachevsky. He edited Lobachevsky's complete works, which appeared in five volumes between 1946 and 1951.
*SAU

1960 John Henry Constantine Whitehead FRS (11 November 1904 – 8 May 1960), known as "Henry", was a British mathematician and one of the founders of homotopy theory. He was born in Chennai (then known as Madras), in India, and died in Princeton, New Jersey, in 1960. During the Second World War he worked on operations research for submarine warfare. Later, he joined the codebreakers at Bletchley Park, and by 1945 was one of some fifteen mathematicians working in the "Newmanry", a section headed by Max Newman and responsible for breaking a German teleprinter cipher using machine methods. Those methods included the Colossus machines, early digital electronic computers. From 1947 to 1960 he was the Waynflete Professor of Pure Mathematics at Magdalen College, Oxford. He became president of the London Mathematical Society (LMS) in 1953, a post he held until 1955. The LMS established two prizes in memory of Whitehead: the Whitehead Prize, awarded annually to multiple recipients, and the Senior Whitehead Prize, awarded biennially. *Wik

2016 Tom Mike Apostol (/əˈpɑːsəl/ ə-POSS-əl; August 20, 1923 – May 8, 2016) was an American analytic number theorist and professor at the California Institute of Technology, best known as the author of widely used mathematical textbooks. Apostol received his Bachelor of Science in chemical engineering in 1944, a Master's degree in mathematics from the University of Washington in 1946, and a PhD in mathematics from the University of California, Berkeley, in 1948. Thereafter Apostol was a faculty member at UC Berkeley, MIT, and Caltech. He was the author of several influential graduate and undergraduate level textbooks. Apostol was the creator and project director for Project MATHEMATICS!, producing videos which explore basic topics in high school mathematics. He helped popularize the visual calculus devised by Mamikon Mnatsakanian, with whom he also wrote a number of papers, many of which appeared in the American Mathematical Monthly.
Apostol also provided academic content for an acclaimed video lecture series on introductory physics, The Mechanical Universe. In 2001, Apostol was elected to the Academy of Athens. He received Lester R. Ford Awards in 2005, 2008, and 2010. In 2012 he became a fellow of the American Mathematical Society.

Credits: *CHM = Computer History Museum *FFF = Kane, Famous First Facts *NSEC = NASA Solar Eclipse Calendar *RMAT = The Renaissance Mathematicus, Thony Christie *SAU = St Andrews Univ. Math History *TIA = Today in Astronomy *TIS = Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell
Conditional Sums-of-AM/GM-Exponentials (conditional SAGE) is a decomposition method to prove nonnegativity of a signomial or polynomial over some subset X of real space. In this article, we undertake the first structural analysis of conditional SAGE signomials for convex sets X. We introduce the X-circuits of a finite subset A ⊂ ℝⁿ, which generalize the simplicial circuits of the affine-linear matroid induced by A to a constrained setting. The X-circuits serve as the main tool in our analysis and exhibit particularly rich combinatorial properties for polyhedral X, in which case the set of X-circuits is comprised of one-dimensional cones of suitable polyhedral fans. The framework of X-circuits transparently reveals when an X-nonnegative conditional AM/GM-exponential can in fact be further decomposed as a sum of simpler X-nonnegative signomials. We develop a duality theory for X-circuits with connections to the geometry of sets that are convex according to the geometric mean. This theory provides an optimal power cone reconstruction of conditional SAGE signomials when X is polyhedral. In conjunction with a notion of reduced X-circuits, the duality theory facilitates a characterization of the extreme rays of conditional SAGE cones. Since signomials under logarithmic variable substitutions give polynomials, our results also have implications for nonnegative polynomials and polynomial optimization.

Sublinear circuits are generalizations of the affine circuits in matroid theory, and they arise as the convex-combinatorial core underlying constrained non-negativity certificates of exponential sums and of polynomials based on the arithmetic-geometric inequality. Here, we study the polyhedral combinatorics of sublinear circuits for polyhedral constraint sets. We give results on the relation between the sublinear circuits and their supports and provide necessary as well as sufficient criteria for sublinear circuits.
Based on these characterizations, we provide some explicit results and enumerations for two prominent polyhedral cases, namely the non-negative orthant and the cube [−1, 1]ⁿ.

The problem of unconstrained or constrained optimization occurs in many branches of mathematics and various fields of application. It is, however, an NP-hard problem in general. In this thesis, we examine an approximation approach based on the class of SAGE exponentials, which are nonnegative exponential sums. We examine this SAGE-cone, its geometry, and generalizations. The thesis consists of three main parts:

1. In the first part, we focus purely on the cone of sums of globally nonnegative exponential sums with at most one negative term, the SAGE-cone. We examine the duality theory, extreme rays of the cone, and provide two efficient optimization approaches over the SAGE-cone and its dual.

2. In the second part, we introduce and study the so-called S-cone, which provides a uniform framework for SAGE exponentials and SONC polynomials. In particular, we focus on second-order representations of the S-cone and its dual using extremality results from the first part.

3. In the third and last part of this thesis, we turn towards examining the conditional SAGE-cone. We develop a notion of sublinear circuits leading to new duality results and a partial characterization of extremality. In the case of polyhedral constraint sets, this examination is simplified and allows us to classify sublinear circuits and extremality for some cases completely. For constraint sets with certain conditions such as sets with symmetries, conic, or polyhedral sets, various optimization and representation results from the unconstrained setting can be applied to the constrained case.
The 𝒮-cone provides a common framework for cones of polynomials or exponential sums which establish non-negativity upon the arithmetic-geometric inequality, in particular for sums of non-negative circuit polynomials (SONC) or sums of arithmetic-geometric exponentials (SAGE). In this paper, we study the 𝒮-cone and its dual from the viewpoint of second-order representability. Extending results of Averkov and of Wang and Magron on the primal SONC cone, we provide explicit generalized second-order descriptions for rational 𝒮-cones and their duals.
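All of these certificates rest on the weighted AM/GM inequality. As a small, self-contained illustration (not taken from any of the abstracts above; the exponents, coefficients, and barycentric weights are invented for the example), the following sketch checks the classical simplicial-circuit condition for a univariate signomial with a single negative term:

```python
import numpy as np

# Simplicial-circuit AM/GM certificate for a signomial
#   f(x) = sum_i c_i * exp(a_i * x) - d * exp(b * x),
# with c_i > 0 and b = sum_i lam_i * a_i for barycentric weights
# lam_i > 0, sum_i lam_i = 1.  By the weighted AM/GM inequality,
# f >= 0 on all of R whenever d <= prod_i (c_i / lam_i) ** lam_i.

def agm_threshold(c, lam):
    """Largest negative-term coefficient d for which f stays nonnegative."""
    c, lam = np.asarray(c, float), np.asarray(lam, float)
    assert np.all(c > 0) and np.all(lam > 0) and np.isclose(lam.sum(), 1.0)
    return float(np.prod((c / lam) ** lam))

# Example: f(x) = 1 + exp(2x) - d*exp(x), where b = 1 = 0.5*0 + 0.5*2.
d = agm_threshold([1.0, 1.0], [0.5, 0.5])   # threshold is 2.0 here
x = np.linspace(-10.0, 10.0, 4001)
f = 1.0 + np.exp(2.0 * x) - d * np.exp(x)
assert f.min() >= -1e-9   # nonnegative; f(0) = 0, so the bound is tight
```

In this example the certificate is exact, since 1 + e^{2x} − 2e^{x} = (e^{x} − 1)², which is why the minimum is attained at zero.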
Parametric Analysis of a High Temperature Packed Bed Thermal Storage Design for a Solar Gas Turbine

P. Klein^a,b,∗, T.H. Roos^b, T.J. Sheer^a

^a School of Mechanical, Industrial and Aeronautical Engineering, University of the Witwatersrand, Johannesburg, South Africa
^b Council for Scientific and Industrial Research, Pretoria, South Africa

The development of a high temperature Thermal Energy Storage (TES) system will allow for high solar shares in Solar Gas Turbine (SGT) plants. In this research a pressurised storage solution is proposed that utilises a packed bed of alumina spheres as the storage medium and air from the gas turbine cycle as the heat transfer fluid. A detailed model of the storage system is developed that accounts for transient heat transfer between discrete fluid and solid phases. The model includes all relevant convective, conductive and radiative heat transfer mechanisms and is validated against high temperature experimental data from a laboratory scale test facility. The validated model is further utilised to conduct a parametric design study of a nominal six hour TES (1.55 MWhth) for a micro-gas turbine. The concepts of utilisation factor and storage efficiency are introduced to determine the optimal storage design. The results of the study indicate that a storage efficiency of 88% and utilisation factor of 85% can be achieved when combining thermal storage and hybridisation with fossil fuels.

Keywords: Concentrating solar power, Thermal storage, Packed bed, Sensible heat, Gas turbine

∗Corresponding Author: Peter Klein, [email protected], +27-12-841-3631

1. Introduction

A concentrated solar-hybrid gas turbine cycle offers new opportunities for off-the-grid and grid supportive power generation. This technology has the potential to provide fully dispatchable power, while remaining cost competitive with diesel generators.
Recuperated gas micro-turbines are applicable to off- the-grid applications as they can be manufactured in the 100-350 kWe range, with cycle efficiencies of up to 30%. They also require no cooling water and are readily hybridisable with fossil fuels. As stated by Klein et al. (2014a), the overall plant efficiency can be further increased by employing an absorption chiller or multi effect desalination unit, powered by thermal energy exhausted from the gas turbine. During periods when insolation is not available, hybridised Solar Gas Tur- bines (SGT) can produce power by operating on backup fossil fuel. However, in order to produce dispatchable power at high solar shares, a high tempera- ture Thermal Energy Storage (TES) system is proposed. As demonstrated by Amsbeck et al. (2010) the inclusion of a nine hour TES could increase the solar share of a gas micro-turbine from 25% to 82%. Ceramic heat regenerators are suited to high temperature TES applications where the heat transfer fluid is air. Packed bed regenerators exhibit high heat transfer rates, offering the potential for efficient, low cost storage (Zunft et al., 2014). This technology was successfully integrated into a Concentrating Solar Power (CSP) plant during the TSA-PHOEBUS (Technology program Solar Air Receiver) project. Too et al. (2012) conducted a review of suitable TES for gas turbines and found a packed bed to be an efficient solution in the near term. Amsbeck et al. (2010) also proposed the use of a pressurised packed bed TES for a gas micro-turbine. This research is focussed on the design analysis of a pressurised packed bed regenerative thermal storage system for a gas micro-turbine. A comprehensive storage model is developed and validated against experimental data. The val- idated model is used to conduct a parametric design study to determine the optimal storage configuration for a 1.55 MWhth (nominal six hours for the cho- sen turbine) thermal storage system. 2. 
Hybridised solar gas turbine cycle

The proposed SGT cycle is based on the Turbec T100 gas turbine. This commercially available micro-turbine is designed for the co-generation of electricity and hot water (Turbec, 2006). The T100 was utilised in the EU FP7 SOLHYCO project (SOlar HYbrid CO-generation) that developed a SGT system based on a high temperature tubular receiver (Amsbeck et al., 2010). AORA Solar currently operates two solar gas turbine plants based on the T100 (Aora, n.d.). The turbine produces 100 kWe of electricity and 170 kWth of thermal energy at standard ISO conditions, with a turbine inlet temperature of 950℃ and pressure ratio of 4.5. A schematic of the cycle, including pressurised thermal storage, is presented in Fig. 1.

Figure 1: Schematic of micro-gas turbine cycle with thermal storage

The cycle consists of a recuperated gas micro-turbine that is converted to accept thermal energy from a pressurised air receiver. Hybridisation is achieved by placing the combustion chamber from the gas turbine in series with the solar receiver, thus allowing boosting of the turbine inlet temperature. The thermal storage is connected in a parallel configuration with the receiver. Excess thermal energy is transferred to the storage using a blower that re-circulates air through the receiver and packed bed. Energy is withdrawn from the storage by diverting the turbine airflow through the packed bed (Klein et al., 2014b).

3. Literature review on high temperature packed bed thermal storage

Forced convection heat transfer in randomly packed beds has received significant research attention over the past decades. Early studies were based on the analytical solutions to the Schumann model (Schumann, 1929; Klinkenberg, 1948). Due to the limited applicability of the analytical solutions, further investigations relied on numerical modelling (Beasley and Clark, 1984; Ismail and Stuginsky, 1999).
Wakao and Kaguei (1982) provide an overview of convective heat transfer models and constitutive correlations for randomly packed beds. Previous research papers are predominately focussed on low temperature systems with a maximum temperature below 200℃. Fewer studies are available on heat transfer in packed beds where the charging temperature exceeds 600℃.

3.1. Gas heat transfer fluid

Jalalzadeh-Azar et al. (1996) developed a high temperature test facility to study the heat transfer in a packed bed of zirconia pellets for industrial heat recovery. The heat transfer fluid in the system was flue gas from a 350 kWth burner for the charging cycle and air for the discharging cycle. The effects of gas radiation and intra-particle conduction were studied. The convective heat transfer correlations of Bradshaw et al. (1970) and Wakao and Kaguei (1982) were found to be suitable up to a temperature of 960℃. Adebiyi et al. (1998) also presented a numerical model of this zirconia TES system which was validated against experimental data up to a temperature of 800℃. The model was further used to conduct a parametric analysis of the system based on first and second law efficiency metrics.

A packed bed of composite salt/ceramic particles was developed by the German Aerospace Institute (DLR), Didier Ceramics and Stuttgart University for high temperature thermal storage systems (Tamme et al., 1990). The micro-encapsulation of a phase change material within the pores of a ceramic structure was aimed at improving the energy density of the storage material. Composite SiO2/Na2SO4 pellets were successfully developed and tested with respect to thermal performance and physical stability. However, testing conducted by Jalalzadeh-Azar et al. (1997) indicated that a SiO2/Na2SO4 material was inferior to zirconia in high temperature packed bed TES experiments. The testing involved charging and discharging cycles, using flue gas and air as the respective heat transfer fluids.
Glück et al. (1991) developed a packed bed test facility to test heat storage materials up to a maximum temperature of 1300℃. The storage system was charged using flue gas from a burner at atmospheric pressure and discharged using air at a pressure of 21 bar, thus demonstrating the operation of a pressurised packed bed storage system.

The heat transfer in a high temperature packed bed was analysed by Du Toit et al. (2006) for nuclear applications. Detailed heat transfer and fluid flow models were presented for the system and solved using the systems based network CFD code FLOWNEX. The model was validated against experimental data from the SANA experiments, conducted by Niessen and Stöcker (1997). Both nitrogen and helium were tested as heat transfer fluids for the case of natural convection heat transfer in the packed bed.

A high temperature rock bed thermal storage system was studied by Zanganeh et al. (2012) using air as the heat transfer fluid at temperatures ranging between 20℃ and 650℃. A numerical model of the system was formulated and validated against experimental data from a pilot scale test facility. The modelling accounted for temperature dependent changes in the thermophysical properties. A design study was conducted for an array of two 7.2 GWhth packed bed storage units.

A parametric analysis of a high temperature TES system for solar thermal applications was presented by Mongibello et al. (2013). The system utilised CO2 as the heat transfer fluid and packed beds of both alumina and zirconia particles were analysed over the temperature range 670-850℃. The numerical heat transfer model was validated against experimental data over the range 100-350℃.

Avila-Marin et al. (2014) described a high temperature regenerative thermal storage system for CSP plants based on air cooled receivers. This research was aimed at characterising four types of commercially available alumina spheres in a laboratory scale test facility.
A numerical model of the system was developed and compared to experimental data up to a maximum temperature of 640℃.

3.2. Liquid heat transfer fluid

Thermocline energy storage in packed beds has also been studied using molten salt as the heat transfer fluid. Typical molten salts used in CSP applications, such as nitrate Solar Salt or HITEC salt, are limited to a maximum temperature below 600℃ (Bradshaw and Siegel, 2008). Therefore they cannot be used with the gas turbine cycle. However, the modelling and parametric studies on the packed bed heat transfer are related to the current work. Thus a brief literature review of this research is included for completeness.

Pacheco and Showalter (2002) analysed a packed bed, molten salt TES using the one-dimensional Schumann equations. The results of this study predicted a theoretical storage capacity of 69%. Modi and Pérez-Segarra (2014) also employed a one-dimensional heat transfer model that included thermal losses from the bed and temperature dependent thermophysical properties of the molten salt. Cyclic behaviour of the system was analysed for three different liquid heat transfer fluids.

Yang and Garimella (2010a) developed a detailed axisymmetric model of the heat transfer and fluid flow in a packed bed molten salt TES system. This model was used to investigate the effects of tank height, salt flow rate and particle size on the discharging behaviour of an adiabatic system. Yang and Garimella (2010b) extended this model to include a non-adiabatic boundary condition at the tank wall.

Xu et al. (2012a) investigated the effects of different empirical correlations for the interstitial heat transfer coefficient and effective thermal conductivity in a packed bed. This modelling indicated that the use of different empirical constitutive correlations had a negligible effect on the predicted thermal performance.
Decreasing the convective heat transfer coefficient or increasing the thermal conductivity of the particles was shown to decrease the discharging time and TES efficiency. This model was used by Xu et al. (2012b) to analyse the effects of porosity, flow rate, inlet temperature and thermal losses on the thermal performance of the packed bed molten salt TES system.

4. Mathematical modelling of TES system

4.1. Governing equations

The simulated storage system consists of a cylindrical packed bed with an insulated wall. The governing equations are formulated in a two-dimensional, axisymmetric coordinate system, using a volume averaged approach. The dynamic two-phase model accounts for discrete fluid and solid phases which exchange energy via convection. The individual solid particles are modelled as a single continuous phase, assuming no intra-particle temperature gradients.

The thermophysical properties of the fluid and solid phases are assumed temperature invariant. As described by Zanganeh et al. (2012), the dominant temperature dependent property is the solid phase heat capacity. The temperature dependent heat capacity of candidate ceramic storage materials is presented by Klein et al. (2014b). The materials considered in the current study undergo small changes in heat capacity when the temperature range of analysis is above 350℃. Thus the assumption of constant thermophysical properties improves the simulation time and does not introduce solution errors. This was confirmed by comparing the constant property model to a variable property model.

The energy equations for the fluid and solid phases are given by:

ε ρ_f c_f ∂T_f/∂t + ρ_f c_f U_z ∂T_f/∂z = h_fs a_p (T_s − T_f) + ∇·(k_eff,f ∇T_f)   (1)

(1 − ε) ρ_s c_s ∂T_s/∂t = h_fs a_p (T_f − T_s) + ∇·(k_eff,s ∇T_s)   (2)

The energy equation for the packed bed wall is:

ρ_w c_w ∂T_w/∂t = ∇·(k_w ∇T_w)   (3)

4.2. Constitutive equations

Due to the confining effect of the wall on randomly packed particles, non-uniform radial variations in void fraction occur in packed beds.
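The coupled fluid-solid energy balance of Eqs. (1)-(2) can be illustrated with a minimal one-dimensional explicit scheme (a Schumann-type model that drops the wall equation, conduction and dispersion terms). This is only a sketch of the model structure, not the paper's OCFE solver; all parameter values below are illustrative, not the study's design values.

```python
import numpy as np

def schumann_step(Tf, Ts, dt, dz, Uz, eps, rho_f, c_f, rho_s, c_s, h_fs, a_p, T_in):
    """One explicit step of a 1-D two-phase (Schumann-type) packed bed model.

    Fluid:  eps*rho_f*c_f*dTf/dt + rho_f*c_f*Uz*dTf/dz = h_fs*a_p*(Ts - Tf)
    Solid:  (1-eps)*rho_s*c_s*dTs/dt = h_fs*a_p*(Tf - Ts)
    Conduction, dispersion and wall losses are neglected in this sketch.
    """
    Tf_new = Tf.copy()
    # upwind advection for the fluid phase (flow in the +z direction)
    adv = rho_f * c_f * Uz * (Tf[1:] - Tf[:-1]) / dz
    exch = h_fs * a_p * (Ts[1:] - Tf[1:])
    Tf_new[1:] = Tf[1:] + dt * (exch - adv) / (eps * rho_f * c_f)
    Tf_new[0] = T_in  # inlet boundary condition
    Ts_new = Ts + dt * h_fs * a_p * (Tf - Ts) / ((1 - eps) * rho_s * c_s)
    return Tf_new, Ts_new

# illustrative charging of a 600 C bed with 950 C air
n = 50
Tf = np.full(n, 600.0)
Ts = np.full(n, 600.0)
for _ in range(2000):
    Tf, Ts = schumann_step(Tf, Ts, dt=0.01, dz=0.04, Uz=0.5, eps=0.4,
                           rho_f=1.0, c_f=1100.0, rho_s=3600.0, c_s=1230.0,
                           h_fs=100.0, a_p=90.0, T_in=950.0)
```

After a short charging period the solid near the inlet is warmer than at the base, i.e. a thermocline forms; the explicit time step is kept small because the fluid thermal mass ε ρ_f c_f is orders of magnitude below that of the solid.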
This effect is included in the current model through the correlation of Hunt and Tien (1990) where:

ε(r) = ε_∞ [1 + ((1 − ε_∞)/ε_∞) exp(−6 (R − r)/d_p)]   (4)

The higher void fraction at the bed wall introduces a velocity channelling effect, whereby the fluid velocity is higher in the near wall region than in the centre of the bed. Assuming axisymmetric and fully developed flow, the profile is resolved by solving the extended Brinkman equation as proposed by Vortmeyer and Schuster (1983), where:

ΔP/L_z = −f_1(r) U_z(r) − f_2(r) (U_z(r))² + µ_eff (1/r) d/dr (r dU_z/dr)   (5)

where

f_1(r) = C_A ((1 − ε(r))² / ε(r)³) (µ_f / d_p²)   (6)

f_2(r) = C_B ((1 − ε(r)) / ε(r)³) (ρ_f / d_p)   (7)

subject to the boundary conditions:

dU_z/dr |_(r=0) = 0   (8)

U_z(R) = 0   (9)

The coefficients C_A and C_B are given the values of 150 and 1.75 respectively. Authors such as Macdonald et al. (1979) have proposed different values for these coefficients. However, these only influence the pressure drop and not the shape of the channelling profile. The effective viscosity is calculated using the correlation proposed by Giese et al. (1998).
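The Hunt and Tien profile of Eq. (4) is straightforward to evaluate; the sketch below assumes an illustrative bulk void fraction ε_∞ = 0.4 and shows that the correlation recovers ε = 1 at the wall and decays to ε_∞ within a few particle diameters of it.

```python
import math

def porosity(r, R, d_p, eps_inf=0.4):
    """Radial void fraction profile of Hunt and Tien (1990), Eq. (4):
    eps(r) = eps_inf * (1 + (1 - eps_inf)/eps_inf * exp(-6*(R - r)/d_p)),
    giving eps -> 1 at the wall (r = R) and eps -> eps_inf in the core."""
    return eps_inf * (1.0 + (1.0 - eps_inf) / eps_inf * math.exp(-6.0 * (R - r) / d_p))
```

Because the decay length scales with d_p, larger particles produce a wider high-porosity annulus at the wall, which is why the wall channelling effect is more significant for the coarse packings considered later in the parametric study.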
The inter-phase heat transfer coefficient is calculated using the correlation of Gunn (1978), which is valid over the void fraction range 0.35 ≤ ε ≤ 1 and up to a Reynolds number of 10^5:

Nu_p = (7 − 10ε + 5ε²)(1 + 0.7 Re_p^0.2 Pr^0.33) + (1.33 − 2.4ε + 1.2ε²) Re_p^0.7 Pr^0.33   (10)

Following from Jeffreson (1972), the isothermal particle assumption can be relaxed by modifying the inter-phase heat transfer coefficient according to:

h_fs = h_p / (1 + 0.2 Bi_p)   (11)

The heat transfer surface area per unit volume of the particles is:

a_p = 6 (1 − ε) / d_p   (12)

The inter-particle heat transfer by conduction and radiation is calculated using the Zehner-Bauer-Schlünder (ZBS) model (IAEA, 2001) that accounts for three modes of heat transfer:

k_eff,s = k_eff^sr + k_eff^sf + k_eff^sc   (13)

where k_eff^sr represents the combined heat transfer by radiation between particles and by conduction through the particles; k_eff^sf represents heat transfer by conduction through the particles and the stagnant fluid; and k_eff^sc represents the heat transfer by conduction through the particles and the contact points between particles (Visser et al., 2008).

The effective fluid conductivity term represents the heat transport through axial and radial dispersion (braiding effect) and is given by:

k_eff,f = C_d Re_p Pr k_f   (14)

Experimental data indicates that the thermal dispersion effect is anisotropic. As described by Beasley and Clark (1984), the value of C_d is 0.1 in the radial direction and between 0.2 and 1 in the axial direction. Jalalzadeh-Azar et al. (1996) neglected axial dispersion in the modelling of a high temperature packed bed. Wakao and Kaguei (1982) recommend a value of 0.5 in the axial direction and 0.1 in the radial direction. Values of 0.3 and 0.1 were chosen for the axial and radial dispersion coefficients respectively, as these best represented the experimental data. Both k_eff,f and k_eff,s were modified within the near wall region to account for the local porosity increase.

4.3.
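The convective closure of Eqs. (10)-(12) can be sketched as follows. The Biot length scale used in the Jeffreson correction is an assumption here (particle radius); the paper does not spell out its definition.

```python
def gunn_nusselt(Re_p, Pr, eps):
    """Particle Nusselt number from the Gunn (1978) correlation, Eq. (10)."""
    return ((7 - 10 * eps + 5 * eps**2) * (1 + 0.7 * Re_p**0.2 * Pr**(1 / 3))
            + (1.33 - 2.4 * eps + 1.2 * eps**2) * Re_p**0.7 * Pr**(1 / 3))

def h_interphase(Re_p, Pr, eps, k_f, d_p, k_s):
    """Convective coefficient h_p = Nu_p * k_f / d_p, corrected for intra-particle
    gradients with the Jeffreson (1972) factor of Eq. (11), h_fs = h_p/(1 + 0.2*Bi_p).
    Bi_p = h_p*(d_p/2)/k_s is one common definition (an assumption here)."""
    h_p = gunn_nusselt(Re_p, Pr, eps) * k_f / d_p
    Bi_p = h_p * (d_p / 2) / k_s
    return h_p / (1 + 0.2 * Bi_p)

def a_p(eps, d_p):
    """Specific heat transfer surface area of spheres per bed volume, Eq. (12)."""
    return 6 * (1 - eps) / d_p
```

A useful sanity check is the stagnant limit: at Re_p = 0 and ε = 1 the Gunn correlation reduces to Nu_p = 2, the classical conduction limit for an isolated sphere.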
Boundary and initial conditions

The computational domain is divided into two regions, namely the packed bed and container wall. The boundary conditions for the simulations are presented in Fig. 2. The modelled wall consists of an inner insulation layer and an outer steel pressure vessel with an assumed thickness of 10 mm. The insulation layer is made from micro-porous insulation material. The model includes the exchange of energy between the solid pebbles and the wall, based on the work of Visser et al. (2008). In order to suppress natural convection effects the air flow enters from the top of the packed bed during charging and from the base during discharging. The charging temperature (red) is 950℃ while the discharging inlet temperature is 600℃ (blue).

Figure 2: Boundary conditions for packed bed and wall regions

4.4. Numerical discretisation

The governing energy equations are solved using the numerical technique of Orthogonal Collocation on Finite Elements (OCFE), with implicit time stepping. Temporal discretisation is achieved using the second order backward difference scheme:

∂T/∂t = (3T^n − 4T^(n−1) + T^(n−2)) / (2Δt)   (15)

OCFE is advocated for resolving partial differential equations involving steep gradients in localised sections of the problem domain (Finlayson, 1980). This method combines the high accuracy of orthogonal collocation with the flexibility of the finite element method. The cubic Hermite polynomials are chosen as the basis function for the collocation procedure. These functions have continuous point values and first derivatives at the element boundaries, making them an efficient basis function for OCFE.
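The C¹ continuity property that motivates the cubic Hermite basis (defined on the unit interval in Eqs. (19)-(22)) can be checked directly: H1 and H3 interpolate the point values at the element ends, H2 and H4 the scaled first derivatives, and H1 + H3 = 1 everywhere.

```python
# Cubic Hermite basis functions on the unit interval, Eqs. (19)-(22).
def H1(u):
    return (1 + 2 * u) * (1 - u)**2   # point value at u = 0

def H2(u, dr):
    return u * (1 - u)**2 * dr        # scaled derivative at u = 0

def H3(u):
    return (3 - 2 * u) * u**2         # point value at u = 1

def H4(u, dr):
    return (u - 1) * u**2 * dr        # scaled derivative at u = 1
```

Because both point values and first derivatives are shared between neighbouring elements, the approximated temperature field is continuously differentiable across element boundaries, which is what makes this basis efficient for the steep thermocline gradients OCFE is used to resolve.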
For example, the fluid temperature in the ab-th element, where r is located between r_a and r_(a+1) and z between z_b and z_(b+1), is approximated by:

T_f^ab(r, z, t) = Σ_(i=1..4) Σ_(j=1..4) φ_ij^ab(t) H_i(u) H_j(v)   (16)

where

u = (r − r_a) / (r_(a+1) − r_a)   (17)

and

v = (z − z_b) / (z_(b+1) − z_b)   (18)

The Hermite polynomials are defined on the unit square as:

H_1(u) = (1 + 2u)(1 − u)²   (19)

H_2(u, Δr_a) = u (1 − u)² Δr_a   (20)

H_3(u) = (3 − 2u) u²   (21)

H_4(u, Δr_a) = (u − 1) u² Δr_a   (22)

4.5. Comparison of developed model with existing models

The present model is comprehensive and addresses all relevant heat transfer mechanisms in the high temperature packed bed. Two key areas exist where there is an improvement over existing packed bed heat transfer models: 1) the inclusion of the local porosity increase at the bed wall and the associated wall channelling flow effect; 2) the in-depth treatment of radiation in the packed bed, via the modified ZBS effective conductivity correlation and the radiation boundary condition between solid particles and the bed wall.

5. TES model validation

A high temperature packed bed test facility was developed to validate the TES model. The facility utilises a 45 kWth LP gas burner to heat a packed bed of alumino-silicate particles. A diagram of the test facility is shown in Fig. 3. Details of the testing programme are provided in Klein et al. (2014b). Tests were conducted over the temperature ranges 350-900℃ and 600-900℃. A blower failure during the 600-900℃ heating test resulted in a limited charging data set. Thus for the purposes of this paper the model was validated using the experimental data over the 350-900℃ range. Although this temperature range is large, the normalised changes in solid heat capacity are small and therefore the constant property assumption remains valid. A full cooling test was achieved over the 600-900℃ temperature range, and the trends in experimental and numerical data are consistent with those presented in Fig. 4.
Figure 3: Diagram of the high temperature packed bed test facility (Klein et al., 2014b)

5.1. Axial temperature profiles

Figure 4 shows the predicted and measured axial temperature profiles along the packed bed centreline for the heating and cooling tests. There is good agreement between the TES model and the experimental data. The shape and position of the thermocline in the test facility is well captured by the model. The model underestimates the rate of heating/cooling at the measurement position z/L_z = 0.18. This is caused by a lack of a diffuser at the packed bed inlet, which introduces a jet effect onto the top layers of particles. As the flow penetrates deeper into the bed this effect dissipates. For the measurement positions where z/L_z ≥ 0.34 there is good agreement between the model and experimental data for both heating and cooling cycles. Therefore the model can be used with confidence to predict the axial temperature front in a high temperature packed bed TES.

Figure 4: Comparison between predicted and measured axial solid temperature profiles. (a) Heating test over the 350-900℃ temperature range, ṁ_f = 83.9 kg/h; (b) cooling test over the 350-900℃ temperature range, ṁ_f = 90.8 kg/h

5.2. Radial temperature profiles

Temperature measurements were taken at the centre of a pebble adjacent to the bed wall (r = 0.19 m) on three axial levels in the packed bed.
These measurements are compared to the solid temperature at the bed centreline for each level in Fig. 5. Despite the complex heat transfer at the bed wall, the predicted temperature profiles match the measurements with good accuracy. During the initial stages of heating, the temperature of the solid close to the wall increases faster than in the core region of the packed bed. This is due to the preferential flow in the near wall region. As the temperature of the bed increases, the rate of lateral heat losses through the wall also increases. This causes a shift in the temperature profiles over time to one dominated by heat losses at the wall. In the cooling cycle, the wall channelling and heat losses preferentially cool the near wall region, causing a larger difference between the centreline and wall region solid temperatures.

5.3. Measurement uncertainties

The uncertainty calculated for the conical inlet discharge coefficient was 1.9%, leading to an uncertainty on the mass flow rate of 3.9%. The temperature measurement error on the Type-K thermocouples was taken as the standard reference of 0.75% of the measured temperature value. The uncertainty associated with the placement of the fluid and solid thermocouples was half a particle diameter (≈10 mm) in the axial direction. At the bed centreline the placement uncertainty of the thermocouples was ±5 mm in the radial direction and at the bed wall ±2 mm in the radial direction.

6. Material selection

Due to the high temperature nature of the TES system only ceramic filler materials are considered in this analysis. Suitable storage materials have the following characteristics: 1) high volumetric heat capacity; 2) high operating temperature capability (1000℃); 3) good thermal conductivity; 4) resistance to thermal shock; 5) low cost.
Figure 5: Comparison between predicted and measured radial solid temperature profiles. (a) Heating test over the 350-900℃ temperature range, ṁ_f = 83.9 kg/h; (b) cooling test over the 350-900℃ temperature range, ṁ_f = 90.8 kg/h

Alumina and zirconia are recommended in the literature as suitable high temperature TES materials (Mongibello et al., 2013; Avila-Marin et al., 2014). The thermophysical properties of these candidate materials are presented in Table 1. Alumina spheres can be sourced in different grades of purity. High-alumina spheres typically have an alumina content greater than 92%, while lower purity alumina spheres are an alumino-silicate material which is predominately SiO2. Due to the pressurised nature of the storage there is a strong economic driver to reduce the volume of the storage. The density of the high alumina spheres is between 3.6 g/cm³ and 3.9 g/cm³, depending on the purity and sintering process. Zirconia was advocated by Jalalzadeh-Azar et al. (1996) and Adebiyi et al. (1998) as an efficient high temperature thermal storage medium. Yttria stabilised zirconia has a density of between 5.4 g/cm³ and 6 g/cm³.

In order to estimate the costs of each material a series of quotes were obtained from ceramic manufacturers. The costs provided in Table 1 represent an average of the quotes obtained (excluding shipping costs). Bindra et al. (2013) estimate the cost of alumina spheres for TES at $100/ft³ (ε = 0.35, ρ_s = 3.9 g/cm³). This equates to a cost of ≈1.4 $/kg, which is in line with the estimation made in this work.
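The volumetric capacity and cost-per-kWh figures quoted in Table 1 follow directly from the material properties. The sketch below reproduces them under the table's stated assumptions (ε = 0.4, ΔT = 350 K); the cost inputs are the paper's averaged manufacturer quotes.

```python
def q_vol_kwh_per_m3(rho_s, c_s, dT, eps):
    """Volumetric storage capacity (1 - eps) * rho_s * c_s * dT, in kWh_th/m^3.
    rho_s in kg/m^3, c_s in J/kgK, dT in K."""
    return (1 - eps) * rho_s * c_s * dT / 3.6e6

def cost_per_kwh(cost_per_kg, rho_s, c_s, dT, eps):
    """Material cost per stored kWh_th: ($/kg * packed mass per m^3) / capacity."""
    return cost_per_kg * rho_s * (1 - eps) / q_vol_kwh_per_m3(rho_s, c_s, dT, eps)

alumina_qvol = q_vol_kwh_per_m3(3600.0, 1230.0, 350.0, 0.4)   # ~258 kWh_th/m^3
zirconia_qvol = q_vol_kwh_per_m3(5400.0, 620.0, 350.0, 0.4)   # ~195 kWh_th/m^3
```

With the quoted prices of 1.3 $/kg (alumina) and 25.2 $/kg (zirconia), the specific costs come out at roughly 10.9 $/kWh_th and 418 $/kWh_th respectively, matching Table 1.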
Zirconia as a heat storage material is more expensive than alumina for two reasons: 1) the higher cost of the raw material and 2) the lower specific heat capacity. In order to store the same amount of energy the storage mass of a zirconia bed would be double that of alumina, while the alumina bed maintains a better volumetric energy storage density. Thus even if the cost of the zirconia material per kilogram was comparable with the alumina, the overall material cost would be double that of the alumina bed. Alumina was therefore chosen as the storage material for further analysis due to its cost effectiveness.

Table 1: Heat storage properties of candidate ceramic core materials

Material | ρ_s [g/cm³] | c_s [kJ/kgK] | k_s [W/mK] | Q_vol [kWhth/m³]¹ | Cost [$/kg] | Cost [$/kWhth]¹
Alumina | 3.6 | 1.23 | 5.4 | 258 | 1.3 | 10.9
Zirconia | 5.4 | 0.62 | 2.5 | 195 | 25.2 | 418

¹ Calculated assuming ε = 0.4, ΔT = 350 K

7. Parametric design study

The validated TES model was used to conduct a parametric design study for a six hour TES for the Turbec T100 turbine. The plant location was Pretoria, South Africa, with an elevation of 1366 m above sea level. The study involved fixing the volume of the packed bed at 7 m³, while varying the bed aspect ratio (α = L_z/D) and particle diameter. The volume was calculated assuming a nominal six hour storage system. The aspect ratios that are considered in this work are α = 1, 2, 3, 4, 5. For each aspect ratio four particle diameters are simulated: d_p = 10, 16, 25, 50 mm.

7.1. Solar multiple and plant data

The Solar Multiple (SM) is defined as the thermal power generated by a given solar field at design point, relative to the thermal power required to operate the turbine at the design point. A SM greater than one is required when implementing thermal storage.
The plant design used in this research is based on work conducted by the Council for Scientific and Industrial Research and the German Aerospace Center (DLR) as part of the Integrated Resource Infrastructure Platform (IRIP) (Roos et al., 2015). The power block parameters for the Turbec T100PH gas turbine are:

• Compressor inlet air temperature and pressure: 25℃ and 88 kPa
• Turbine power at operating conditions: 73 kWe
• Turbine efficiency at operating conditions: 28.2%

The reduction in turbine power from 100 kWe is due to the operating conditions. Although the altitude is constrained by location, the turbine power and efficiency could be increased to 83 kWe and 29.7% respectively by implementing inlet air cooling to 15℃. This is not considered at present. The solar collection system parameters are:

• Heliostat area: 13.4 m²
• Annual heliostat field efficiency: 67%
• Design point Direct Normal Irradiance (DNI): 950 W/m²
• High temperature receiver efficiency: 70%
• Receiver and storage inlet pressure: 396 kPa (abs)

Hourly DNI data, in the form of a Typical Meteorological Year (TMY), is available from the International Weather for Energy Calculation (IWEC) database for Johannesburg. This city is located 50 km from the plant location and experiences similar weather patterns. The TMY allows for performance calculations based on statistically probable weather data. The excess energy available for storage is calculated by integrating the surplus of receiver thermal output over the turbine thermal demand across the charging period:

E_stored = ∫_tc (Q̇_rec(t) − Q̇_turb) dt   (23)

Four SMs were investigated to determine a viable plant design (excluding cost analysis), able to incorporate the proposed thermal storage system. The DNI profile for each day in the TMY was analysed and the excess energy available for storage was calculated using Eq. (23). The thermal energy was then converted into a nominal hourly value using the turbine thermal power requirement Q̇_turb. As shown in Fig. 6, six hours of thermal storage is not feasible with a solar multiple of 1.5 or 1.8.
For a SM of 2.1, 31% of days in the TMY will supply six or more hours of thermal storage, while 52% of days will supply three or more hours of thermal storage. If the SM is increased to 2.4, 41% of days will supply six or more hours of thermal storage, and 57% of days will provide three or more hours of thermal storage.

Figure 6: Number of days in TMY with storage greater than or equal to hourly values between 1 and 9 hours

From an energy yield perspective it is naturally beneficial to maximise the SM. However the overall capital cost of the plant is strongly dependent on the choice of SM. This is primarily related to the size of the required heliostat field. A detailed financial optimisation of the SM with respect to the Levelised Cost of Electricity (LCOE) is beyond the scope of this work. Therefore the chosen SM is based on energy considerations.

Figure 6 indicates that for a SM of 2.4 a large number of days in the TMY are able to supply more than six hours of storage. However, once the TES is fully charged any further excess energy cannot be stored. Figure 7 presents the total energy stored throughout the TMY for six hours of storage, compared to the total energy available for storage. The results reveal that the level of wasted energy is significantly higher for the SM of 2.4 than the solar multiple of 2.1. Therefore a SM of 2.1 was chosen for this work.

For comparison, Amsbeck et al. (2010) chose a solar multiple of 2.7 for a nine hour storage system for the T100 gas turbine at a plant location that receives an annual DNI of 2015 kWh/m²/yr (Amsbeck et al., 2008). The total sum of the DNI profiles taken from the TMY for the current plant location amounts to 1782 kWh/m²/yr.
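Eq. (23) can be discretised over an hourly DNI profile to estimate the nominal storage hours a given day supplies. The sketch below is a simplification that folds the field and receiver efficiencies into the SM definition and assumes the delivered thermal power scales linearly with DNI; the ~259 kWth turbine demand follows from 73 kWe at 28.2% efficiency, and the toy DNI profile is illustrative, not TMY data.

```python
def storage_hours(dni_hourly, sm, q_turb_kw=259.0, dni_design=950.0):
    """Nominal storage hours from an hourly DNI profile (W/m^2), i.e. a
    discretised Eq. (23). The field is sized to deliver sm * q_turb_kw of
    thermal power at the design-point DNI; any surplus over the turbine
    demand is assumed storable."""
    excess_kwh = 0.0
    for dni in dni_hourly:
        q_field = sm * q_turb_kw * dni / dni_design  # delivered thermal power, kW
        if q_field > q_turb_kw:
            excess_kwh += (q_field - q_turb_kw) * 1.0  # 1 h time step
    return excess_kwh / q_turb_kw  # hours of full-load turbine operation

# toy clear-day DNI profile (W/m^2), one value per hour
day = [0] * 7 + [400, 700, 900, 950, 950, 950, 900, 700, 400] + [0] * 8
```

At SM = 1 the field never exceeds the turbine demand, so no storage accumulates; for this toy profile an SM of 2.1 yields a little over six nominal storage hours, consistent with the choice made above.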
The use of an annual heliostat field efficiency in Eq. (23) is an approximation, as the actual heliostat efficiency will vary according to the relative position of the sun to the heliostat field. A more detailed plant model would take this into account. As the aim of this study is to investigate the storage component of the design, this approximation is acceptable.

Figure 7: Analysis of the annual energy available for storage as a function of SM, excluding partial discharging

7.2. Design day

Figure 8 shows the DNI profile taken from the TMY for typical clear summer and winter days. The summer day has a higher peak and wider spread across the day. However, an analysis of the TMY showed that Pretoria experiences a significantly higher number of clear days in winter than summer, due to cloud cover. Thus the clear winter day DNI profile was chosen for the design study. This day is able to fill the storage to maximum capacity.

Figure 8: DNI profile for clear summer and winter days

7.3. Utilisation factor

Unlike latent heat TES, sensible heat TES is non-isothermal. Therefore these systems must be over-sized, in order to account for the storage material that does not undergo the full temperature change between the charge and discharge temperatures. The efficient use of the storage material is vital to cost reductions, as over-sizing of the system increases the storage mass and volume. This is particularly important in a pressurised environment where a premium is placed upon the storage volume.
The concept of a utilisation factor is introduced to benchmark the amount of thermal energy that is recovered from each storage configuration, relative to the total theoretical maximum that can be stored:

F = [∫_td ṁ_d c_f (T_d(t) − 873) dt] / (m_s c_s ΔT_rec)   (24)

where the discharge temperature is defined as the mass-flow-weighted mean fluid temperature at the bed outlet:

T_d(t) = (2π ρ_f / ṁ_d) ∫_0^R U_z(r) T_f(0, r) r dr   (25)

There are three factors that affect the utilisation factor: 1) the maximum allowable temperature that the packed bed base (z = L_z) reaches during the charging cycle; 2) the maximum allowable decrease in discharging temperature; 3) heat losses through the storage wall.

In an ideal storage system the base of the packed bed would reach 950℃ during the charging cycle, thus maximising the amount of thermal energy stored. However, the hot air exiting the packed bed is in direct contact with the mesh support grid for the ceramic particles and the blower. In order to avoid the use of costly nickel based super-alloys, these components should not exceed 700℃ (Glück et al., 1991). Figure 9 shows the effect of the maximum base temperature on F, for a packed bed configuration of α = 3 and d_p = 10, 50 mm. Raising the allowable base temperature from 650℃ to 750℃ increases F by a maximum of 6% and 15% for the 10 mm and 50 mm particles respectively. The effect is more pronounced for the larger particles due to the lower rate of convective heat transfer.

During the discharge cycle the exit temperature of the fluid (discharging temperature) starts to decrease over time as the thermocline approaches the bed exit. Once the discharging temperature decreases to below the specified limit the storage is considered to be depleted and the discharging cycle is stopped. The allowable decrease in discharge temperature is dependent on the choice of operating strategy for the SGT. If the system is designed to operate on solar energy only (no hybridisation), the discharge temperature should remain close to 950℃ and not decrease below 850℃.
At 850℃ the turbine power is already significantly reduced (52 kWe assuming 28.2% conversion efficiency). If hybridisation is included then the combustion chamber can be used to boost the turbine inlet temperature to 950℃. As shown in Fig. 9, the inclusion of hybridisation increases F for the TES, as a larger portion of energy can be extracted from the storage before it is considered depleted.

Figure 9: Utilisation factor as a function of maximum base temperature and allowable decrease in discharging temperature

7.4. Thermocline shape

The term thermocline refers to the shape of the axial temperature profile between the ‘hot’ (950℃) and ‘cold’ (600℃) zones in the packed bed. The gradient of the thermocline influences the utilisation factor. A steep thermocline gradient allows for a large portion of the packed bed mass to be heated to 950℃ before the base reaches the temperature limit. Figure 10 presents the shape of the thermocline for various packed bed configurations at the point when the charging cycle is complete. In the current research a conservative temperature limit of 650℃ was imposed on the base.

The gradient of the thermocline is dependent on the convective heat transfer between the fluid and solid particles. Decreasing the particle size increases the surface area for heat exchange, while increasing the aspect ratio increases the convective heat transfer coefficient. Improved convective heat transfer increases the storage efficiency and utilisation factor.
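The utilisation factor of Eq. (24) can be evaluated numerically from a discharge temperature trace; a minimal sketch with trapezoidal integration is shown below. Temperatures are in kelvin, with 873 K (600℃) as the discharge inlet reference, and the example mass and flow values are purely illustrative.

```python
def utilisation_factor(times_s, T_d, m_dot_d, c_f, m_s, c_s,
                       dT_rec=350.0, T_ref=873.0):
    """Discretised Eq. (24): enthalpy recovered above the 873 K discharge
    inlet temperature, relative to the bed's maximum capacity m_s*c_s*dT_rec.
    times_s: sample times [s]; T_d: discharge temperature trace [K]."""
    num = 0.0
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        q0 = m_dot_d * c_f * (T_d[i - 1] - T_ref)
        q1 = m_dot_d * c_f * (T_d[i] - T_ref)
        num += 0.5 * (q0 + q1) * dt  # trapezoidal rule
    return num / (m_s * c_s * dT_rec)

# illustrative check: discharging at a constant 1223 K (950 C) for exactly the
# time needed to extract the full capacity should give F = 1
F_full = utilisation_factor([0.0, 12300.0], [1223.0, 1223.0],
                            m_dot_d=1.0, c_f=1000.0, m_s=10000.0, c_s=1230.0)
```

In a real discharge T_d(t) sags as the thermocline reaches the outlet, so the integral, and hence F, falls below this ideal limit; the hybridised operating strategy recovers part of the shortfall by tolerating a larger temperature decrease.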
Figure 10: Charging thermocline shape when T_base = 650 ℃ (clear winter day): radially averaged fluid temperature versus axial position z/L_z, for α = 2, 3, 4, 5 and d_p = 10, 50 mm

7.5. Pressure drop and blower power requirement

The increase in convective heat transfer in the packed bed must be balanced against the increase in pressure drop, as the gas turbine performance is sensitive to this parameter. One of the benefits of the pressurised system is the density-related decrease in the volumetric flow rate. Figure 11 shows the effect of aspect ratio and particle diameter on the pressure drop during the discharging cycle. In discharging mode the pressure drop is shown to be below 4.5 kPa for all bed configurations that were simulated. Under the worst case scenario of α = 5 and d_p = 10 mm, the pressure drop is close to that of the benchmark receiver (4 kPa). Thus the pressure drop is acceptable for all configurations under discharging conditions. In charging mode the blower must circulate the air between the packed bed and the receiver. The electrical power requirement for the blower is calculated using:

\dot{E}_{blower} = \frac{\dot{m}_c}{\eta_{blower} \, \rho_f} \left( \Delta P_{bed} + \Delta P_{rec} \right) \qquad (26)

The pressure drop across the packed bed was calculated using Eq.(5). Due to the lack of a detailed receiver design, an initial estimate of the receiver pressure drop was made based on measurements taken during testing of the SOLGATE receiver. As described by Spelling (2013), the receiver pressure drop can be estimated by scaling the conditions relative to the SOLGATE receiver. In future work, once detailed receiver designs are conducted, the receiver pressure drop can be more accurately modelled. The pressure drop is estimated by:

\Delta P_{rec} = \Delta P_{ref} \left( \frac{G_{rec}}{G_{ref}} \right)^2 \frac{P_{ref}}{P_{rec}} \frac{T_{rec}}{T_{ref}} \qquad (27)

where ΔP_ref = 40 mbar, G_ref = 1.063 kg/sm^2, P_ref = 6.5 bar and T_ref = 700 ℃.
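The reference-anchored scaling of Eq.(27) can be sketched numerically. Note that the exact exponents are not fully legible in the source text; the sketch below assumes the common ideal-gas dynamic-pressure scaling ΔP ∝ G²T/P, anchored so that the reference conditions reproduce the quoted 40 mbar:

```python
# Assumed scaling law dP ~ G^2 * T / P, anchored to the reference point
# quoted in the text. This is an illustrative reconstruction, not a
# definitive statement of the paper's correlation.
dP_ref = 40e2          # 40 mbar expressed in Pa
G_ref = 1.063          # reference mass flux [kg/(s m^2)]
P_ref = 6.5e5          # reference pressure [Pa]
T_ref = 700.0 + 273.0  # reference temperature [K]

def receiver_dp(G, P, T):
    """Estimated receiver pressure drop [Pa] at mass flux G [kg/(s m^2)],
    pressure P [Pa] and temperature T [K]."""
    return dP_ref * (G / G_ref) ** 2 * (P_ref / P) * (T / T_ref)

# At the reference conditions the estimate reproduces the measured 40 mbar
print(receiver_dp(G_ref, P_ref, T_ref))  # -> 4000.0 Pa
```

Doubling the mass flux at fixed pressure and temperature quadruples the estimated pressure drop, which is the expected turbulent-flow behaviour.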
Figure 11: Pressure drop across the packed bed as a function of aspect ratio, for d_p = 10, 16, 25 and 50 mm, calculated using Eq.(5) and ṁ_f = 0.64 kg/s

Figure 12 shows that most of the blower power is used to circulate the air through the receiver. For α = 5 and d_p = 10 mm the blower power reaches 7.3 kW_e, which is 10% of the electrical power produced by the turbine. The sudden decrease in power requirements, shown in Fig. 12, occurs when the storage charging is stopped as the base temperature has reached its limit. Even with very low pressure drop across the packed bed, the blower power requirements reach 5 kW_e due to the receiver pressure drop. This could be overcome by including a second receiver to use for charging the storage. This would avoid the increased mass flow and associated pressure drop across the primary receiver.

Figure 12: Blower power requirements (total and TES-only) during the charging cycle on a clear winter's day, for α = 1, 5 and d_p = 10, 50 mm

7.6. Storage efficiency

The efficiency of each storage configuration is calculated using Eq.(28). The numerator represents the total thermal energy extracted from the TES during discharging. The denominator represents the total thermal energy supplied to the packed bed as well as the energy supplied to operate the blower. The blower electrical energy is converted into an equivalent thermal energy using the efficiency of the gas turbine (η_turb = 0.282 at the specified conditions).

\xi = \frac{Q_{th,extracted}}{Q_{th,supplied} + Q_{\Delta P,supplied}} \qquad (28)

where:

Q_{th,extracted} = \int_0^{t_d} \dot{m}_d c_f \left( T_d(t) - 873 \right) dt \qquad (29)

Q_{th,supplied} = \int_0^{t_c} \dot{m}_c c_f \left( T_c(t) - 873 \right) dt \qquad (30)

Q_{\Delta P,supplied} = \frac{1}{\eta_{turb}} \int_0^{t_c} \dot{E}_{blower}(t) \, dt \qquad (31)

with T_c(t) denoting the bed inlet temperature during charging.

8. Results

The parametric design study includes the operation of the storage with and without hybridisation.
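As a worked check of Eq.(28), the numbers for configuration 5 of Table 2 (α = 2, d_p = 10 mm) reproduce the 88% storage efficiency quoted in the conclusions:

```python
# Worked check of Eq.(28) using the configuration-5 row of Table 2
# (alpha = 2, d_p = 10 mm), all energies in kWh_th.
Q_extracted = 1559.0   # energy discharged (TES+hybridisation)
Q_supplied = 1656.0    # energy charged into the bed
Q_dP = 114.0           # blower electricity / eta_turb, Eq.(31)

xi = Q_extracted / (Q_supplied + Q_dP)   # Eq.(28)
print(round(xi, 2))  # -> 0.88, the efficiency reported for this configuration
```

The same arithmetic applied to any other row of Table 2 gives that configuration's storage efficiency.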
During the discharging cycle the storage is considered to be depleted when the discharge temperature drops below 620 ℃ for TES+hybridisation, and below 850 ℃ for TES only. The results for the storage efficiencies and utilisation factors are presented in Figs. 13 and 14 respectively. The advantages of combining TES with fossil fuel hybridisation are clear. The boosting of the discharge temperature allows for more energy to be extracted from the storage before it is considered depleted, resulting in an increase in storage efficiency and utilisation factor. As shown in Fig. 13, the storage efficiency decreases with increasing aspect ratio for TES+hybridisation. This is caused by the increase in pressure drop and heat losses through the wall. The particle diameter does not have a large influence on the efficiency for this case. The largest 50 mm particles have efficiencies between 0.4% and 1.4% lower than the 10 mm particle diameters for the various aspect ratios. Increasing the aspect ratio from 1 to 5 decreases the efficiency by between 4.1% and 5%, depending on the particle diameter. For the case of TES only, the decrease in convective heat transfer at low aspect ratios results in lower storage efficiencies. The choice of particle diameter has a more pronounced effect on storage efficiency for the TES only case. Using the 10 mm particles improves the efficiency by up to 27% when compared to the 50 mm particles. Both test cases exhibit small improvements in utilisation factor with increasing aspect ratio. The choice of particle diameter plays a more important role. The 10 mm particles maximise the utilisation factor for both the TES+hybridisation and TES only cases.

Figure 13: Storage efficiencies from design study (TES+hybridisation and TES only; d_p = 10, 16, 25, 50 mm)

Figure 14: Utilisation factors from design study (TES+hybridisation and TES only; d_p = 10, 16, 25, 50 mm)

Table 2 provides insight into the energy stored and discharged by each storage configuration. In order to achieve a nominal six hours of storage, the TES system should be able to store a total of 1553 kWh_th. At a SM of 2.1 the design day that was chosen can supply a total of 1782 kWh_th of excess thermal energy. As shown in Table 2, only storage configurations 1, 5, 6, 9, 10, 13, 14, 15, 17, 18 and 19 can store more than 1553 kWh_th before the base temperature reaches its limit. In terms of energy extracted, only configurations 5, 9, 13 and 17 can supply more than 1553 kWh_th for the case of TES+hybridisation. These configurations also have the highest blower requirements of 6–7.3 kW_e. Configurations 10, 14 and 18 can supply greater than 97% of the targeted energy value for TES+hybridisation. For the TES only case, configuration 17 is able to discharge 93% of the targeted energy value. A second design iteration can be performed with an increase in storage volume to achieve 100% of the specified discharge energy.

9. Conclusions

A high temperature thermal storage system was proposed in order to increase the solar share of a SGT for off-the-grid power generation. A detailed model of the system was developed and solved using a finite element approach. The model was successfully validated against experimental data from a purpose-built test facility. Alumina was identified as an effective high temperature storage material due to its low cost and high volumetric energy storage density.
The validated TES model was used to conduct a parametric design study to determine the optimal storage configuration for a 7 m^3 packed bed. The results showed that the inclusion of hybridisation is important in order to increase the storage efficiency and utilisation factor. For each analysed storage configuration the level of stored energy increased with increasing aspect ratio and decreasing particle diameter. For the case of TES+hybridisation the configuration of α = 2 and d_p = 10 mm is preferable, yielding 1559 kWh_th of energy discharged at a storage efficiency of 88% and a utilisation factor of 85%. If hybridisation is not allowed for, the preferable configuration is α = 4 and d_p = 10 mm, yielding 1435 kWh_th of energy discharged at a storage efficiency of 78% and a utilisation factor of 78%.

Table 2: Results of parametric design study for nominal six hour TES

Config. | α | d_p [mm] | Q_supply [kWh_th] | Q_extract^1 [kWh_th] | Q_extract^2 [kWh_th] | E_blower/η_turb [kWh_th]
1  | 1 | 10 | 1588 | 1517 (0.98) | 1291 (0.83) | 104
2  | 1 | 16 | 1510 | 1442 (0.93) | 1159 (0.75) | 98
3  | 1 | 25 | 1411 | 1343 (0.86) | 991 (0.64)  | 92
4  | 1 | 50 | 1182 | 1113 (0.71) | 626 (0.40)  | 77
5  | 2 | 10 | 1656 | 1559 (1.00) | 1381 (0.89) | 114
6  | 2 | 16 | 1588 | 1492 (0.96) | 1265 (0.81) | 106
7  | 2 | 25 | 1495 | 1399 (0.90) | 1106 (0.71) | 99
8  | 2 | 50 | 1282 | 1184 (0.76) | 769 (0.50)  | 84
9  | 3 | 10 | 1684 | 1572 (1.01) | 1412 (0.91) | 124
10 | 3 | 16 | 1620 | 1507 (0.97) | 1298 (0.84) | 112
11 | 3 | 25 | 1530 | 1417 (0.91) | 1152 (0.74) | 103
12 | 3 | 50 | 1329 | 1214 (0.78) | 832 (0.54)  | 88
13 | 4 | 10 | 1705 | 1581 (1.02) | 1435 (0.92) | 137
14 | 4 | 16 | 1638 | 1513 (0.97) | 1320 (0.85) | 120
15 | 4 | 25 | 1556 | 1430 (0.92) | 1179 (0.76) | 108
16 | 4 | 50 | 1360 | 1232 (0.79) | 868 (0.56)  | 92
17 | 5 | 10 | 1716 | 1582 (1.02) | 1439 (0.93) | 153
18 | 5 | 16 | 1651 | 1516 (0.98) | 1331 (0.86) | 128
19 | 5 | 25 | 1570 | 1434 (0.92) | 1191 (0.77) | 114
20 | 5 | 50 | 1389 | 1251 (0.81) | 902 (0.58)  | 95

^1 TES+hybridisation case
^2 TES only case
Bracketed values indicate the fraction of energy discharged relative to 1553 kWh_th.

10.
Acknowledgements

The authors would like to thank the Council for Scientific and Industrial Research (CSIR) for funding to perform this research, and the South African National Energy Development Institute (SANEDI) for equipment funding.

Nomenclature

A – area, m^2
a_p – particle surface area to volume ratio, m^-1
c – specific heat capacity at constant pressure, J/kgK
C_A – inertial Ergun coefficient
C_B – viscous Ergun coefficient
C_d – dispersion coefficient
d_p – particle diameter, m
DNI – direct normal irradiance, W/m^2
Ė – electrical power, J/s
f_1 – Brinkman equation viscous coefficient, kg/m^2s
f_2 – Brinkman equation inertial coefficient, kgs^2/m^3
F – utilisation factor
G – mass flux, kg/sm^2
H – Hermite polynomials
h_fs – modified heat transfer coefficient, W/m^2K
h_p – inter-phase heat transfer coefficient, W/m^2K
k – thermal conductivity, W/mK
k_eff – effective thermal conductivity, W/mK
L_z – packed bed length, m
m – mass, kg
ṁ – mass flow rate, kg/s
N – number
P – pressure, Pa
ΔP – pressure drop, Pa
Q – thermal energy, J
Q_vol – volumetric energy storage density, J/m^3
Q̇ – thermal power, J/s
q – heat flux, W/m^2
R – packed bed radius, m
r – radial coordinate, m
T – temperature, K
t – time, s
u – local element radial coordinate
U_z – axial superficial fluid velocity, m/s
v – local element axial coordinate
z – axial coordinate, m

Greek Symbols

α – packed bed aspect ratio (L_z/D)
ε – void fraction
ε_∞ – bed centre void fraction
η – efficiency
μ – dynamic viscosity, Pa s
μ_eff – effective fluid viscosity, Pa s
φ – unknown spline coefficients
ρ – density, kg/m^3
ξ – storage efficiency

Superscripts

ab – finite element location
n – time step
a – radial element boundary
b – axial element boundary

Subscripts

bed – packed bed
c – charging
d – discharging
f – fluid phase
hel – heliostat
rec – solar receiver
ref – REFOS receiver
s – solid phase
turb – gas turbine
w – wall

Non-Dimensional Variables

Bi_p = h_fs d_p / (2 k_s) – particle Biot number
Nu_p = h_fs d_p / k_f – particle Nusselt number
Pr = c_f μ / k_f – Prandtl number
Re_p = ρ_f U_z d_p / μ – particle Reynolds number

Adebiyi, G. A., Nsofor, E. C., Steele, W. G., Jalalzadeh-Azar, A. A., 1998. Parametric study on the operating efficiencies of a packed bed for high-temperature sensible heat storage. ASME Journal of Solar Energy Engineering 120 (1), 2–

Amsbeck, L., Buck, R., Heller, P., Jedamski, J., Uhlig, R., 2008. Development of a tube receiver for a solar-hybrid microturbine system. In: Proceedings of the 14th SolarPACES Conference, Las Vegas, pp. 4–7.

Amsbeck, L., Denk, T., Ebert, M., Gertig, C., Heller, P., Herrmann, P., ... & Uhlig, R., 2010. Test of a solar-hybrid microturbine system and evaluation of storage deployment. In: 16th SolarPACES Symposium, Perpignan, France.

Aora, n.d. http://aora-solar.com/active-sites, accessed 31/10/2014.

Avila-Marin, A. L., Alvarez-Lara, M., Fernandez-Reche, J., 2014. A regenerative heat storage system for central receiver technology working with atmospheric air. Energy Procedia 49, 705–714.

Beasley, D. E., Clark, J. A., 1984. Transient response of a packed bed for thermal energy storage. International Journal of Heat and Mass Transfer 27 (9), 1659–

Bindra, H., Bueno, P., Morris, J. F., Shinnar, R., 2013. Thermal analysis and exergy evaluation of packed bed thermal storage systems. Applied Thermal Engineering 52 (2), 255–263.

Bradshaw, A. V., Johnson, A., McLachlan, N. H., Chiu, Y. T., 1970. Heat transfer between air and nitrogen and packed beds of non-reacting solids. Transactions of the Institute of Chemical Engineers 48, T77–T84.

Bradshaw, R. W., Siegel, N. P., 2008. Molten nitrate salt development for thermal energy storage in parabolic trough solar power systems.
In: ASME 2008 2nd International Conference on Energy Sustainability collocated with the Heat Transfer, Fluids Engineering, and 3rd Energy Nanotechnology Conferences. American Society of Mechanical Engineers, pp. 631–637.

Du Toit, C. G., Rousseau, P. G., Greyvenstein, G. P., Landman, W. A., 2006. A systems CFD model of a packed bed high temperature gas cooled nuclear reactor. International Journal of Thermal Sciences 45 (10), 70–85.

Finlayson, B. A., 1980. Orthogonal collocation on finite elements - progress and potential. Mathematics and Computers in Simulation 22 (1), 11–17.

Giese, M., Rottschäfer, K., Vortmeyer, D., 1998. Measured and modeled superficial flow profiles in packed beds with liquid flow. American Institute of Chemical Engineers Journal 44 (2), 484–490.

Glück, A., Tamme, R., Kalfa, H., Streuber, C., 1991. Investigation of high temperature storage materials in a technical scale test facility. Solar Energy Materials 24 (1), 240–248.

Gunn, D. J., 1978. Transfer of heat or mass to particles in fixed and fluidised beds. International Journal of Heat and Mass Transfer 21 (4), 467–476.

Hunt, M. L., Tien, C. L., 1990. Non-Darcian flow, heat and mass transfer in catalytic packed-bed reactors. Chemical Engineering Science 45 (1), 55–63.

IAEA, 2001. Heat transport and afterheat removal for gas-cooled reactors under accident conditions. Technical report IAEA-TECDOC-1163.

Ismail, K. A., Stuginsky, R., 1999. A parametric study on possible fixed bed models for PCM and sensible heat storage. Applied Thermal Engineering 19 (7), 757–788.

Jalalzadeh-Azar, A. A., Steele, W. G., Adebiyi, G. A., 1996. Heat transfer in a high-temperature packed bed thermal energy storage system - roles of radiation and intraparticle conduction. Journal of Energy Resources Technology 118 (1), 50–57.

Jalalzadeh-Azar, A. A., Steele, W. G., Adebiyi, G. A., 1997. Performance comparison of high-temperature packed bed operation with PCM and sensible heat pellets.
International Journal of Energy Research 21 (11), 1039–1052.

Jeffreson, C. P., 1972. Prediction of breakthrough curves in packed beds: 1. Applicability of single parameter models. American Institute of Chemical Engineers Journal 18 (2), 409–416.

Klein, P., Roos, T., Sheer, T. J., 2014a. High temperature thermal storage for solar gas turbines using encapsulated phase change materials. In: The 2nd Southern African Solar Energy Conference.

Klein, P., Roos, T., Sheer, T. J., 2014b. Experimental investigation into a packed bed thermal storage solution for solar gas turbine systems. Energy Procedia 49, 840–849.

Klinkenberg, A., 1948. Numerical evaluation of equations describing transient heat and mass transfer in packed solids. Industrial & Engineering Chemistry 40 (10), 1992–1994.

Macdonald, I. F., El-Sayed, M. S., Mow, K., Dullien, F. A., 1979. Flow through porous media - the Ergun equation revisited. Industrial & Engineering Chemistry Fundamentals 18 (3), 199–208.

Modi, A., Pérez-Segarra, C. D., 2014. Thermocline thermal storage systems for concentrated solar power plants: One-dimensional numerical model and comparative analysis. Solar Energy 100, 84–93.

Mongibello, L., Atrigna, M., Graditi, G., 2013. Parametric analysis of a high temperature sensible heat storage system by numerical simulations. Journal of Solar Energy Engineering 135 (4), 041010.

Niessen, H. F., Stöcker, B., 1997. Data sets of the SANA experiment 1994–1996. Tech. rep., JUEL-3409, Forschungszentrum Jülich GmbH.

Pacheco, J. E., Showalter, S. K. and Kolb, W. J., 2002. Development of a molten-salt thermocline thermal storage system for parabolic trough plants. Journal of Solar Energy Engineering 124 (2), 153–159.

Roos, T. H., Rubin, N., Maliage, M., Klein, P., Dunn, D., Perumal, S., 2015. DPSS IRIP Progress Report for 2010/2011 and 2011/2012. CSIR Project Report, DPSS ASC2011/046.
Episode 7 – The equations for the Model of Complete and Incomplete Coordinate Systems
On September 30, 2007
In Episode 7, we explore the equations behind the model of Complete and Incomplete Coordinate Systems. First, we revisit the definitions of Complete and Incomplete Coordinate Systems. Then the equations are presented and derived graphically. In addition to building an understanding of the equations, the derivation reveals the meaning of the sub-expression vx'/(c^2 - v^2) that appears in Einstein's time (Tau) equation. Please download the accompanying PDF file associated with this episode.
Presentation in PDF Format
How to find if variable is divisible by 2 with JavaScript?
Sometimes, we want to find if a variable is divisible by 2 with JavaScript. In this article, we'll look at how to check whether a variable is divisible by 2 in JavaScript.
How to find if variable is divisible by 2 with JavaScript?
To find if a variable is divisible by 2, we can use the modulo operator. For instance, we write
const divisibleBy2 = variable % 2 === 0;
to check whether variable is divisible by 2 with the expression variable % 2 === 0.
If variable is divisible by 2, then we get 0 as the remainder when we divide variable by 2. The % operator returns the remainder when we divide the left operand by the right one.
To find if a variable is divisible by 2 with JavaScript, we can use the modulo operator.
All About NumPy logical_and in Python

The NumPy library in Python consists of a large collection of high-level mathematical functions. These functions are used for handling large, multi-dimensional arrays and matrices in Python and for performing various logical and statistical operations on them. With NumPy, we can perform mathematical computations at high speed in Python. In this article, we shall be learning about one of the logical functions present in NumPy – the numpy.logical_and function.

What are logical operations?

In mathematics, logical operations are performed to obtain the relation between two entities in the form of a boolean value. They test whether the logical relationship between two entities is True or False. In Python programming, logical operations can be performed between two lists, variables, or arrays. The NumPy module contains four main logical operations, and logical_and is one of them:

1. logical_and
2. logical_or
3. logical_not
4. logical_xor

NumPy logical_and

numpy.logical_and is a function to perform the logical AND operation in Python. With this function, we can find the truth value of the AND operation between two variables, or compute it element-wise for two lists or arrays. The bitwise & operator can be used in place of the logical_and function when we are working with boolean values.

Syntax of the numpy.logical_and function

The syntax of the numpy.logical_and function is as follows:

numpy.logical_and(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj]) = <ufunc 'logical_and'>

x1, x2: These are the input arrays between whose elements the logical_and operation has to be performed. The shapes of both arrays should be equal (or broadcastable to a common shape).

out: An n-dimensional array; an optional parameter. The default value of out is None. This parameter specifies the location where the output has to be stored. Its length should match x1 and x2.
If out is not provided, by default the function returns a new array with the result.

where: An optional array_like parameter. The where parameter specifies a condition: the operation is computed only at positions where it is True.

Return value:

y: The function returns an n-dimensional array consisting of boolean values, obtained by applying the AND operation element-wise between the arrays x1 and x2.

The truth table of the logical AND operation

The basic truth table for the AND operation between two boolean values is:

x1    | x2    | AND operation
True  | True  | True
True  | False | False
False | True  | False
False | False | False

Examples of numpy.logical_and

Let us first understand how numpy.logical_and works by taking a few examples. First, we import the numpy library. Then, we look at the output of the logical AND operation between different combinations of boolean values. The logical_and function returns True only if both values are True.

When x1 = False and x2 = False

For two False boolean values, the output of logical_and is also False.

import numpy as np
np.logical_and(False, False)
Output: False

When x1 = False and x2 = True

import numpy as np
np.logical_and(False, True)
Output: False

When x1 = True and x2 = False

import numpy as np
np.logical_and(True, False)
Output: False

When x1 = True and x2 = True

import numpy as np
np.logical_and(True, True)
Output: True

logical_and between two arrays

We can apply logical_and between two arrays. The arrays can be boolean arrays, number arrays, or a combination of both.

Let us take two boolean arrays:

import numpy as np
x1 = [False, True, False]
x2 = [True, False, False]
print(np.logical_and(x1, x2))

The output is: [False False False]

Let us take two arrays containing both numerical and boolean values. For numerical values, all non-zero values are treated as True and zero is treated as False.

import numpy as np
x1 = [False, 7, 0, 4, 5]
x2 = [9, True, False, 8, 1]
print(np.logical_and(x1, x2))

The output is: [False True False True True]

logical_and on conditions

We can also pass conditions instead of arrays of values.
Let us consider an array x, which we obtain from the arange() function in numpy. We then use logical_and to flag only those numbers which lie strictly between 1 and 10:

import numpy as np
x = np.arange(0, 20, 5)
print(x)
print(np.logical_and(x > 1, x < 10))

The output is:
[ 0 5 10 15]
[False True False False]

This is because 5 is the only number from x which satisfies both of the given conditions.

Using numpy.logical_and for more than two arrays

Although the signature of logical_and only accepts two input arrays, we can use it to compare more than two arrays simultaneously by nesting calls: the result of an inner logical_and is passed as the first argument to an outer logical_and. This works because the AND operation is associative and commutative. The code to achieve that is given below:

import numpy as np
x1 = [8, 9, 0, 1, 4, 6]
x2 = [0, 5, 2, 6, 7, 4]
x3 = [1, 2, 1, 5, 9, 8]
print(np.logical_and(np.logical_and(x1, x2), x3))

The output is: [False True False True True True]

numpy.logical_and along an axis using reduce

We can also apply logical_and along an axis of an array by calling its reduce() method with the axis parameter. For example, let us take a two-dimensional array and reduce it along axis=0 and axis=1.

For axis = 0: This performs the logical_and operation column-wise, among the elements of each column.

import numpy as np
x1 = np.array([[True, False], [True, True], [True, False]])
print(np.logical_and.reduce(x1, axis=0))

The output is: [True False]

For axis = 1: This performs the logical_and operation row-wise, for each row. Since here we have three rows, the length of the output array will be 3.

import numpy as np
x1 = np.array([[True, False], [True, True], [True, False]])
print(np.logical_and.reduce(x1, axis=1))

The output is: [False True False]

Also Read | Everything You Wanted to Know About Numpy Arctan2

FAQs on numpy.logical_and

What is the difference between logical_and and the bitwise & operator?
The logical_and function compares the truth values of its two operands and returns the result of the logical AND as a boolean, while the bitwise & operator operates on the individual bits of integer operands. Let us take an example:

print(9 & 5)                 # statement 1
print(np.logical_and(9, 5))  # statement 2

The two statements give different outputs. In statement 1, the bitwise & operator ANDs the bits of 9 and 5 (1001 & 0101 = 0001), which prints 1. In statement 2, logical_and treats both non-zero numbers as True and prints True.

Why does a ValueError occur while using numpy.logical_and?

A ValueError may occur while using logical_and if the shapes of the two arrays x1 and x2 are not the same (and cannot be broadcast together). Pass arrays of compatible shapes, and the error will be resolved.

That is all, folks! If you have any questions in mind, leave them below in the comments. Until next time, keep learning!
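As a supplement to the syntax section above, here is a short sketch of the optional out and where parameters of numpy.logical_and (it is a NumPy ufunc, so both parameters behave as they do for any ufunc):

```python
import numpy as np

x1 = np.array([True, True, False, True])
x2 = np.array([True, False, False, True])

# out: write the result into a preallocated boolean buffer
buf = np.zeros(4, dtype=bool)
np.logical_and(x1, x2, out=buf)
# buf is now [True, False, False, True]

# where: compute the AND only at selected positions; positions where
# the mask is False keep whatever value `out` already held
mask = np.array([True, True, False, False])
np.logical_and(~x1, x2, out=buf, where=mask)
# buf is now [False, False, False, True]: only the first two
# positions were recomputed, the last two kept their previous values
```

Using out avoids allocating a fresh array on every call, which can matter in tight loops over large arrays.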
Interval-based Techniques

Exploration is often more efficient when it is based on second-order information about the certainty or variance of the estimated values of actions. Kaelbling's interval estimation algorithm [52] stores statistics for each action a_i: w_i, the number of successes, and n_i, the number of trials. An action is chosen by computing the upper bound of a confidence interval on the success probability of each action and choosing the action with the highest upper bound; smaller confidence levels encourage greater exploration (the normal approximation to the binomial can be used for boolean payoffs, though the binomial itself is more appropriate for small n). Other payoff distributions can be handled using their associated statistics or with nonparametric methods. The method works very well in empirical trials. It is also related to a certain class of statistical techniques known as experiment design methods [17], which are used for comparing multiple treatments (for example, fertilizers or drugs) to determine which treatment (if any) is best in as small a set of experiments as possible.

Leslie Pack Kaelbling
Wed May 1 13:19:13 EDT 1996
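The selection rule described above can be sketched as follows for boolean payoffs. This is a minimal reconstruction, not Kaelbling's original code, and the normal-approximation upper bound used here is just one simple choice of confidence interval:

```python
import math

class IntervalEstimation:
    """Sketch of interval-estimation action selection for 0/1 payoffs."""

    def __init__(self, n_actions, z=1.96):
        self.z = z                         # width of the confidence interval
        self.successes = [0] * n_actions   # w_i
        self.trials = [0] * n_actions      # n_i

    def upper_bound(self, a):
        n = self.trials[a]
        if n == 0:
            return float("inf")            # untried actions look maximally promising
        p = self.successes[a] / n
        # normal approximation to the binomial confidence interval
        return p + self.z * math.sqrt(p * (1 - p) / n)

    def select(self):
        bounds = [self.upper_bound(a) for a in range(len(self.trials))]
        return bounds.index(max(bounds))   # act greedily w.r.t. the upper bounds

    def update(self, a, reward):
        self.trials[a] += 1
        self.successes[a] += reward        # reward is 0 or 1
```

Because an uncertain action's interval is wide, its upper bound can exceed that of a better-estimated action, so the rule explores automatically and the exploration fades as counts grow.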
Light Your Haunt With Hacked LED Christmas Lights
Posted on December 17th, 2014 by headspook

Shove a bunch of LEDs into a can, then watch Ohm's Law and Murphy's Law duke it out.

Safely lighting our outdoor haunt has always been a compromise game. Every spotlight comes with an extension cord, and those routes have to be planned because haunt visitors are like free range chickens, or BBs. They run all over. Keeping the electrified snakes from attacking our chickens is a key responsibility that we don't take lightly.

In an effort to reduce the snake population, we investigated other lighting options and came up with what seemed to be a viable alternative: Low voltage landscape lighting. Specifically, we built an assortment of LED spots, powered by a 12V 350W transformer over 12AWG low voltage power cable. Because the cable carries 12V DC current, it can be safely "tucked" an inch or so beneath the soil with just a spade.

Each lamp consists of a cluster of LED Christmas lights housed in a length of two-inch diameter PVC pipe, which is sealed and mounted to a post. The positive and negative leads are attached with wire nuts to a low voltage wire connector. The best part of this arrangement is that the connector can be attached to the power bus at any point along its length, allowing for greater flexibility in the layout of our lighting plan. With fewer extension cords lying about, there are fewer opportunities for "unplanned interactions" between them and our guests. All we had to do was keep our chickens from tripping over the lamps.

The basic idea is to pack as many LEDs as possible into the can without setting any fires. The not setting any fires part requires that we understand a few key concepts. For example, why would I wire three or four LEDs in series, and then connect three or four of these series circuits together in parallel? If you know the answer, then you can go outside and play while the rest of the class catches up.
It is usually at this point in a dissertation on LEDs that the author devotes several paragraphs to a review of Ohm's Law. Instead of heading off into those weeds, we're going to wade through some other weeds. We'll tackle the math as we go.

How do LEDs work?

In the most general terms, a circuit is designed to operate at a particular voltage, and will draw as much current as it needs. An LED requires a certain minimal current to turn on. The "forward voltage" is the least amount of voltage required to allow current to flow through the LED. The amount of current changes (exponentially) based on the amount of voltage that is applied. A small increase in voltage results in a large increase in current. The more current, the brighter the light. That is, until it overheats and dies.

There are two species of LED Christmas lights. One type consists of red, orange, yellow, green, and blue LEDs. The other type utilizes only a white LED encased in a colored plastic sheath or bulb. It's important to know the difference because the latter type provides a simpler solution for our application. All the LEDs have the same power requirement: about 2V (forward) and about 3V (optimal), drawing about 20mA. For simplicity's sake we'll be discussing this type of LED.

Note: I had a zillion of the colored LEDs and no data sheet, which meant I had to employ my unpaid assistants, Trial and Error, to determine the power requirements for each color. I should mention that Trial was generally cautious during testing while Error ham-handedly blew through not a few LEDs.

Think of voltage as a sluice gate above a water wheel. Say the gate is marked with stops, 1-12, that represent voltage. Open the gate to 1 and a trickle of water flows over the wheel, but it's not enough to make it turn. Open the gate further to 2 and the wheel begins to turn. We're at forward voltage, meaning the LED is on, but just barely. We need to apply more voltage (to get more current to flow) for it to glow more brightly.
Open the gate to 3 and now there's enough power to grind some corn. We're at optimal voltage and the LED glows at the brightness for which it was designed. If you go all the way to 12, then you, or the passive-aggressive engineer who built the gate, must have a grudge against the miller. Like the wheel about to fly off its axle, when you over-energize an LED, it may glow very brightly for a short period of time, but it's toast.

Wiring LEDs in series

An LED is polarized, meaning it has a positive lead (anode) and a negative lead (cathode). To wire two LEDs in series, connect the positive side of one to the negative side of the other.

Note: Always connect the anode to the positive side of the voltage source, and the cathode to the negative side of the voltage source.

How many LEDs can safely be connected in series? Simply add up the voltage requirement for each LED until you reach source voltage (12V in this case). Assuming 3V (20mA) for each LED, the answer is four (3V+3V+3V+3V=12V). This series circuit can handle 12V, and would draw 20mA.

I can't get my LEDs to add up to exactly 12V

Three goes into 12 four times, which is nice. But suppose your LEDs can't handle 3V, and instead prefer 2.5V. In that case, how do you get to an even number of LEDs?

• Option I – Intentionally overload the circuit

You can place 5 LEDs in series. Each LED would get 2.4V (12V / 5 = 2.4V), instead of 2.5V, which means they'll be a little dimmer. But it might not be enough of a difference for you to notice. Then again, they might be a lot dimmer, in which case you're on to your second option.

• Option II – Use a resistor to drop excess voltage

Place four LEDs in series. Since these four LEDs would only require 10V (2.5V * 4 = 10V), you need to drop the extra two volts with a resistor. The resistor value can be calculated with Ohm's Law as R = V / I. If each LED requires 20mA (.02A), then the resistor value would be 100Ω.
(2V / .02A = 100Ω)

So, you would add a 100Ω resistor at one end of your series circuit to deal with that extra two volts.

• Option III – Overdrive the circuit

Don't. You saw what happened to the miller.

Wiring LEDs in parallel

Four LED Christmas lights don't emit much light, and we've determined that we can't add more LEDs to our series circuit, so how do we pack more light into our lamp? Simple. Build four series circuits, then wire those together in parallel. At the end of each series circuit is a lead. One is positive, the other is negative. Connect the positive leads, connect the negative leads, and you now have a cluster. Each series circuit draws 20mA, so the cluster would draw a total of 80mA.

How many series circuits can be wired together in parallel to form a cluster? That depends on how much current your power supply can provide, and how good you are at packing everything together inside its container. Our 12V transformer is rated at 350W. To figure the total amount of current I can draw from the transformer, [I(Amps) = P(Watts) / V(Volts)]

Therefore, 350W/12V = 29A, or 29,000mA.

Our transformer could, theoretically, provide current for roughly 362 clusters (80mA * 362 = 28,960mA). That's assuming, of course, that our power bus is a superconductor, or a spherical chicken in a vacuum. It's neither, so voltage losses in the power cable would probably be a limiting factor. Still, you could pile on a lot of lamps without worrying about the transformer bursting into flames.

Coming up… How to build a 12V LED lamp.

I know what you're thinking. We're two articles into this project (if you count the introduction), and we haven't yet glued anything to the workbench (silicone sealant is surprisingly adhesive) or accidentally hurled a hunk of PVC across the room (off the table saw, and I'm extremely lucky to still have two ears). I figured it was a good idea to cover the more arcane aspects of the project first. With these out of the way, the rest of the construction is a snap.
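The sizing arithmetic above (series-string length, dropping resistor, transformer headroom) is easy to script for quick what-ifs. This is just a sketch of the article's math in Python, using its assumed figures (12V supply, 2.5V / 20mA LEDs, four strings per cluster, 350W transformer); the function names are ours, not from any lighting library.

```python
def leds_per_string(supply_v, led_v):
    """Largest number of LEDs whose forward voltages fit under the supply."""
    return int(supply_v // led_v)

def dropping_resistor(supply_v, led_v, n_leds, current_a):
    """Ohm's law (R = V / I): resistor that burns off the voltage the LEDs don't use."""
    excess_v = supply_v - n_leds * led_v
    return excess_v / current_a

def max_clusters(transformer_w, supply_v, strings_per_cluster, string_current_a):
    """How many clusters the transformer could feed, ignoring cable losses."""
    total_current_a = transformer_w / supply_v          # I = P / V
    cluster_current_a = strings_per_cluster * string_current_a
    return int(total_current_a // cluster_current_a)

# Four 2.5V LEDs in series leave 2V to drop across a resistor:
print(leds_per_string(12, 2.5))                    # 4
print(round(dropping_resistor(12, 2.5, 4, 0.020))) # 100 ohms
# Four 20mA strings per cluster; the article rounds 350W/12V down to 29A
# before dividing, so it gets ~362 where the unrounded figure is 364:
print(max_clusters(350, 12, 4, 0.020))             # 364
```

The cluster count is a ceiling, not a plan: as the article notes, voltage drop along the 12AWG bus will limit the real lamp count well before the transformer does.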
musl - Re: [PATCH v2] MT fork

Message-Id: <C6YXH9CO8YXT.1ASOTS17EZ872@mussels>
Date: Mon, 09 Nov 2020 15:01:24 -0300
From: Érico Nogueira <ericonr@...root.org>
To: <musl@...ts.openwall.com>, <musl@...ts.openwall.com>
Subject: Re: [PATCH v2] MT fork

On Mon Nov 9, 2020 at 9:07 AM -03, Rich Felker wrote:
> On Sun, Nov 08, 2020 at 05:12:15PM +0100, Szabolcs Nagy wrote:
> > * Rich Felker <dalias@...c.org> [2020-11-05 22:36:17 -0500]:
> > > On Fri, Oct 30, 2020 at 11:31:17PM -0400, Rich Felker wrote:
> > > > One thing I know is potentially problematic is interaction with malloc
> > > > replacement -- locking of any of the subsystems locked at fork time
> > > > necessarily takes place after application atfork handlers, so if the
> > > > malloc replacement registers atfork handlers (as many do), it could
> > > > deadlock. I'm exploring whether malloc use in these systems can be
> > > > eliminated. A few are almost-surely better just using direct mmap
> > > > anyway, but for some it's borderline. I'll have a better idea sometime
> > > > in the next few days.
> > >
> > > OK, here's a summary of the affected locks (where there's a lock order
> > > conflict between them and application-replaced malloc):
> >
> > if malloc replacements take internal locks in atfork
> > handlers then it's not just libc internal locks that
> > can cause problems but locks taken by other atfork
> > handlers that were registered before the malloc one.
>
> No other locks taken there could impede forward process in libc. The
> only reason malloc is special here is because we allowed the
> application to redefine a family of functions used by libc. For
> MT-fork with malloc inside libc, we do the malloc_atfork code last so
> that the lock isn't held while other libc components' locks are being
> taken. But we can't start taking libc-internal locks until the
> application atfork handlers run, or the application could see a
> deadlocked libc state (e.g. if something in the atfork handlers used
> time functions, maybe to log time of fork, or gettext functions, maybe
> to print a localized message, etc.).
>
> > i don't think there is a clean solution to this. i
> > think using mmap is ugly (uless there is some reason
> > to prefer that other than the atfork issue).
>
> When the size is exactly a 4k page, it's preferable in terms of memory
> usage to use mmap instead of malloc, except on wacky archs with large
> pages (which I don't think it makes sense to memory-usage-optimize
> for; they're inherently and unfixably memory-inefficient).
>
> But for the most part, yes, it's ugly and I don't want to make such a change.
>
> > > - atexit: uses calloc to allocate more handler slots of the builtin 32
> > > are exhausted. Could reasonably be changed to just mmap a whole page
> > > of slots in this case.
>
> Not sure on this. Since it's only used in the extremely rare case
> where a huge number of atexit handlers are registered, it's probably
> nicer to use mmap anyway -- it avoids linking a malloc that will
> almost surely never be used by atfork.
>
> > > - dlerror: the lock is just for a queue of buffers to be freed on
> > > future calls, since they can't be freed at thread exit time because
> > > the calling context (thread that's "already exited") is not valid to
> > > call application code, and malloc might be replaced. one plausible
> > > solution here is getting rid of the free queue hack (and thus the
> > > lock) entirely and instead calling libc's malloc/free via dlsym
> > > rather than using the potentially-replaced symbol. but this would
> > > not work for static linking (same dlerror is used even though dlopen
> > > always fails; in future it may work) so it's probably not a good
> > > approach. mmap is really not a good option here because it's
> > > excessive mem usage. It's probably possible to just repeatedly
> > > unlock/relock around performing each free so that only one lock is
> > > held at once.
>
> I think the repeated unlock/relock works fine here, but it would also
> work to just null out the list in the child (i.e. never free any
> buffers that were queued to be freed in the parent before fork). Fork
> of MT process is inherently leaky anyway (you can never free thread
> data from the other parent threads). I'm not sure which approach is
> nicer. The repeated unlock/relock is less logically invasive I think.
>
> > > - gettext: bindtextdomain calls calloc while holding the lock on list
> > > of bindings. It could drop the lock, allocate, retake it, recheck
> > > for an existing binding, and free in that case, but this is
> > > undesirable because it introduces a dependency on free in
> > > static-linked programs. Otherwise all memory gettext allocates is
> > > permanent. Because of this we could just mmap an area and bump
> > > allocate it, but that's wasteful because most programs will only use
> > > one tiny binding. We could also just leak on the rare possibility of
> > > concurrent binding allocations; the number of such leaks is bounded
> > > by nthreads*ndomains, and we could make it just nthreads by keeping
> > > and reusing abandoned ones.
>
> Thoughts here?
>
> > > - sem_open: a one-time calloc of global semtab takes place with the
> > > lock held. On 32-bit archs this table is exactly 4k; on 64-bit it's
> > > 6k. So it seems very reasonable to just mmap instead of calloc.
>
> This should almost surely just be changed to mmap. With mallocng, an
> early malloc(4k) will take over 5k, and this table is permanent so the
> excess will never be released. mmap avoids that entirely.
>
> > > - timezone: The tz core allocates memory to remember the last-seen
> > > value of TZ env var to react if it changes. Normally it's small, so
> > > perhaps we could just use a small (e.g. 32 byte) static buffer and
> > > replace it with a whole mmapped page if a value too large for that
> > > is seen.
> > >
> > > Also, somehow I failed to find one of the important locks MT-fork
> > > needs to be taking: locale_map.c has a lock for the records of mapped
> > > locales. Allocation also takes place with it held, and for the same
> > > reason as gettext it really shouldn't be changed to allocate
> > > differently. It could possibly do the allocation without the lock held
> > > though and leak it (or save it for reuse later if needed) when another
> > > thread races to load the locale.
> >
> > yeah this sounds problematic.
> >
> > if malloc interposers want to do something around fork
> > then libc may need to expose some better api than atfork.
>
> Most of them use existing pthread_atfork if they're intended to be
> usable with MT-fork. I don't think inventing a new musl-specific API
> is a solution here. Their pthread_atfork approach already fully works
> with glibc because glibc *doesn't* make anything but malloc work in
> the MT-forked child. All the other subsystems mentioned here will
> deadlock or blow up in the child with glibc, but only with 0.01%
> probability since it's rare for them to be under hammering at fork
> time.
>
> One solution you might actually like: getting rid of
> application-provided-malloc use inside libc. This could be achieved by
> making malloc a thin wrapper for __libc_malloc or whatever, which
> could be called by everything in libc that doesn't actually have a
> contract to return "as-if-by-malloc" memory. Only a few functions like
> getdelim would be left still calling malloc.

This code block in glob() uses strdup(), which I'd assume would have to
use the application provided malloc. Wouldn't that have to be worked
around somehow?

	if (*pat) {
		char *p = strdup(pat);
		if (!p) return GLOB_NOSPACE;
		buf[0] = 0;
		size_t pos = 0;
		char *s = p;
		if ((flags & (GLOB_TILDE | GLOB_TILDE_CHECK)) && *p == '~')
			error = expand_tilde(&s, buf, &pos);
		if (!error)
			error = do_glob(buf, pos, 0, s, flags, errfunc, &tail);

> The other pros of such an approach are stuff like making it so
> application code doesn't get called as a callback from messy contexts
> inside libc, e.g. with dynamic linker in inconsistent state. The major
> con I see is that it precludes omitting the libc malloc entirely when
> static linking, assuming you link any part of libc that uses malloc
> internally. However, almost all such places only call malloc, not
> free, so you'd just get the trivial bump allocator gratuitously
> linked, rather than full mallocng or oldmalloc, except for dlerror
> which shouldn't come up in static linked programs anyway.
>
> Rich
A Student's Guide to the Navier-Stokes Equations by Justin W. Garvin

Format: pdf, ePub, mobi, fb2
ISBN: 9781009236164
Publisher: Cambridge University Press

The Navier-Stokes equations describe the motion of fluids and are an invaluable addition to the toolbox of every physicist, applied mathematician, and engineer. The equations arise from applying Newton's laws of motion to a moving fluid and are considered, when used in combination with mass and energy conservation rules, to be the fundamental governing equations of fluid motion. They are relevant across many disciplines, from astrophysics and oceanic sciences to aerospace engineering and materials science. This Student's Guide provides a clear and focused presentation of the derivation, significance and applications of the Navier-Stokes equations, along with the associated continuity and energy equations. Designed as a useful supplementary resource for undergraduate and graduate students, each chapter concludes with a selection of exercises intended to reinforce and extend important concepts. Video podcasts demonstrating the solutions in full are provided online, along with written solutions and other additional resources.

A Student's Guide to Maxwell's Equations - Book Depository
In this guide for students, each equation is the subject of an entire chapter, with detailed, A Student's Guide to the Navier-Stokes Equations.

A Student's Guide to the Navier-Stokes Equations (Student's
The Navier-Stokes equations describe the motion of fluids and are an invaluable addition to the toolbox of every physicist, applied mathematician,

A Student's Guide to the Navier-Stokes Equations
Description · Product details · Other books in this series · Table of contents · About Justin W. Garvin.

A Student's Guide to Infinite Series and Sequences
Preface; 1. Infinite sequences; 2. Infinite series; 3. Power series; 4. Complex infinite series; 5. Series solutions for differential equations; 6. Fourier,

Search results in Student's Guides | Higher Education from
A Student's Guide to the Navier-Stokes Equations · Justin W. Garvin, University of Iowa. Coming soon. Online ISBN: 9781009236119.

A Student's Guide to Analytical Mechanics - Book Depository
A Student's Guide to Maxwell's Equations · Daniel Fleisch. 12 Feb 2019. Paperback. US$31.99 A Student's Guide to the Navier-Stokes Equations.

A Student's Guide to the Navier-Stokes Equations (Student's
A clear and focused guide to the Navier-Stokes equations that govern fluid motion, including exercises and fully worked solutions. About the Author. Justin W.

A Student's Guide to the Schroedinger Equation
Retaining the popular approach used in Fleisch's other Student's Guides, this friendly resource uses A Student's Guide to the Navier-Stokes Equations.
Definition 5.1.5.1. Let $q: X \rightarrow S$ be a morphism of simplicial sets. We say that $q$ is a locally cartesian fibration if the following conditions are satisfied:

- The morphism $q$ is an inner fibration.
- For every edge $\overline{e}: s \rightarrow t$ of the simplicial set $S$ and every vertex $z \in X$ satisfying $q(z) = t$, there exists a locally $q$-cartesian edge $e: y \rightarrow z$ of $X$ satisfying $q(e) = \overline{e}$.
Answered! Good day, Could you kindly help me with the question below.

Research GCD algorithms, and analyze the Euclidean algorithm and Steiner's algorithm. How do they work? What is their relative complexity? Do you think one is "better" than the other? What factors influence your opinion?

Implement a GCD algorithm. Show your test data and results. When complete, email with subject: GCD, and have a text file with the source code, a text file with the test data, and a text file with the results of running the test data.

Write a program to create prime numbers, using iteration, and then again using recursion. Show the first 100 primes in your results file for each algorithm. What are the advantages and disadvantages of using recursion, for example, in the prime number generator?

Expected results:
- 4 programs (labs): 2 GCD, 2 prime number. One each with recursion, and without recursion
- 1 writeup on Euclidean and Steiner's Algorithm (1-2 pages)
- 1 writeup on advantages and disadvantages of recursion for problems such as these (1-2 pages)

In general for labs, submit:
- program .java source file
- test data
- test results (text file)
- any notes

*subject for the email should be the lab being submitted, such as: "GCD – recursion"

For writeups, please use a doc or docx file, and submit as an attachment. Kindly add a date, as it will help me keep track of your work. Eg: 170520GCDalgorithm.doc

Expert Answer

Euclid's algorithm:

In mathematics, the Euclidean algorithm[a], or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two numbers, the largest number that divides both of them without leaving a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in Euclid's Elements (c. 300 BC).
It is an example of an algorithm, a step-by-step procedure for performing a calculation according to well-defined rules, and is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations. The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. For example, 21 is the GCD of 252 and 105 (as 252 = 21 × 12 and 105 = 21 × 5), and the same number 21 is also the GCD of 105 and 252 − 105 = 147. Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal. When that occurs, they are the GCD of the original two numbers. By reversing the steps, the GCD can be expressed as a sum of the two original numbers each multiplied by a positive or negative integer, e.g., 21 = 5 × 105 + (−2) × 252. The fact that the GCD can always be expressed in this way is known as Bézout’s identity. The version of the Euclidean algorithm described above (and by Euclid) can take many subtraction steps to find the GCD when one of the given numbers is much bigger than the other. A more efficient version of the algorithm shortcuts these steps, instead replacing the larger of the two numbers by its remainder when divided by the smaller of the two (with this version, the algorithm stops when reaching a zero remainder). With this improvement, the algorithm never requires more steps than five times the number of digits (base 10) of the smaller integer. This was proven by Gabriel Lamé in 1844, and marks the beginning of computational complexity theory. 
Additional methods for improving the algorithm's efficiency were developed in the 20th century.

package sample;

public class Euclid_Recursion {
    // recursive implementation
    public static int gcd(int p, int q) {
        if (q == 0) return p;
        return gcd(q, p % q);
    }

    public static void main(String[] args) {
        int p = 4;
        int q = 8;
        int d = gcd(p, q);
        System.out.println("gcd(" + p + ", " + q + ") = " + d);
    }
}

package sample;

public class Euclid_Iteration {
    // non-recursive implementation
    public static int gcd2(int p, int q) {
        while (q != 0) {
            int temp = q;
            q = p % q;
            p = temp;
        }
        return p;
    }

    public static void main(String[] args) {
        int p = 4;
        int q = 8;
        int d2 = gcd2(p, q);
        System.out.println("gcd(" + p + ", " + q + ") = " + d2);
    }
}

Steiner's (properly, Stein's) algorithm:

The binary GCD algorithm, also known as Stein's algorithm, computes the greatest common divisor of two nonnegative integers. Stein's algorithm uses simpler arithmetic operations than the conventional Euclidean algorithm; it replaces division with arithmetic shifts, comparisons, and subtraction. The algorithm was first published by the Israeli physicist and programmer Josef Stein in 1967. It reduces the problem of finding the GCD by repeatedly applying these identities:

1. gcd(0, v) = v, because everything divides zero, and v is the largest number that divides v. Similarly, gcd(u, 0) = u. gcd(0, 0) is not typically defined, but it is convenient to set gcd(0, 0) = 0.
2. If u and v are both even, then gcd(u, v) = 2·gcd(u/2, v/2), because 2 is a common divisor.
3. If u is even and v is odd, then gcd(u, v) = gcd(u/2, v), because 2 is not a common divisor. Similarly, if u is odd and v is even, then gcd(u, v) = gcd(u, v/2).
4. If u and v are both odd, and u ≥ v, then gcd(u, v) = gcd((u − v)/2, v). If both are odd and u < v, then gcd(u, v) = gcd((v − u)/2, u). These are combinations of one step of the simple Euclidean algorithm, which uses subtraction at each step, and an application of step 3 above. The division by 2 results in an integer because the difference of two odd numbers is even.[3]
5. Repeat steps 2–4 until u = v, or (one more step) until u = 0. In either case, the GCD is 2^k * v, where k is the number of common factors of 2 found in step 2.

The algorithm requires O(n²)[4] worst-case time, where n is the number of bits in the larger of the two numbers. Although each step reduces at least one of the operands by at least a factor of 2, the subtract and shift operations take linear time for very large integers (although they're still quite fast in practice, requiring about one operation per word of the representation).

package sample;

public class Steiner_Recursion {
    public static int gcd(int u, int v) {
        // simple cases (termination)
        if (u == v) return u;
        if (u == 0) return v;
        if (v == 0) return u;

        // look for factors of 2
        if (u % 2 == 0) {              // u is even
            if (v % 2 != 0)            // v is odd
                return gcd(u >> 1, v);
            else                       // both u and v are even
                return gcd(u >> 1, v >> 1) << 1;
        }
        if (v % 2 == 0)                // u is odd, v is even
            return gcd(u, v >> 1);

        // reduce larger argument
        if (u > v)
            return gcd((u - v) >> 1, v);
        return gcd((v - u) >> 1, u);
    }

    public static void main(String[] args) {
        System.out.println("gcd(252, 105) = " + gcd(252, 105));
    }
}

package sample;

public class Steiner_Iteration {
    public static int gcd(int u, int v) {
        int shift;

        /* GCD(0,v) == v; GCD(u,0) == u; GCD(0,0) == 0 */
        if (u == 0) return v;
        if (v == 0) return u;

        /* Let shift := lg K, where K is the greatest power of 2
           dividing both u and v. */
        for (shift = 0; ((u | v) & 1) == 0; ++shift) {
            u >>= 1;
            v >>= 1;
        }

        while ((u & 1) == 0)
            u >>= 1;

        /* From here on, u is always odd. */
        do {
            /* remove all factors of 2 in v -- they are not common */
            /* note: v is not zero, so while will terminate */
            while ((v & 1) == 0)   /* Loop X */
                v >>= 1;

            /* Now u and v are both odd. Swap if necessary so u <= v,
               then set v = v - u (which is even). For bignums, the
               swapping is just pointer movement, and the subtraction
               can be done in-place. */
            if (u > v) { int t = v; v = u; u = t; }  // Swap u and v.
            v = v - u;  // Here v >= u.
        } while (v != 0);

        /* restore common factors of 2 */
        return u << shift;
    }

    public static void main(String[] args) {
        System.out.println("gcd(252, 105) = " + gcd(252, 105));
    }
}

Why not to use recursion:
1. It is usually slower due to the overhead of maintaining the stack.
2. It usually uses more memory for the stack.

Why to use recursion:
1. Recursion adds clarity and (sometimes) reduces the time needed to write and debug code (but doesn't necessarily reduce space requirements or speed of execution).
2. Some problems, such as divide-and-conquer algorithms, are most naturally expressed recursively.
3. Performs better in solving problems based on tree structures.
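As a quick cross-check on the Java listings above (this is outside the original answer), the sketch below implements both approaches compactly in Python and verifies that a recursive Euclidean gcd and an iterative binary (Stein) gcd agree on a grid of small inputs, including the worked 252/105 example from the Euclid discussion.

```python
def gcd_euclid(p, q):
    """Recursive Euclidean algorithm: gcd(p, q) = gcd(q, p mod q)."""
    return p if q == 0 else gcd_euclid(q, p % q)

def gcd_stein(u, v):
    """Iterative binary GCD: shifts, comparisons, and subtraction only."""
    if u == 0:
        return v
    if v == 0:
        return u
    shift = 0
    while ((u | v) & 1) == 0:      # strip common factors of 2
        u >>= 1
        v >>= 1
        shift += 1
    while (u & 1) == 0:            # make u odd
        u >>= 1
    while v:
        while (v & 1) == 0:        # factors of 2 in v are not common
            v >>= 1
        if u > v:
            u, v = v, u            # keep u <= v
        v -= u                     # difference of two odds is even
    return u << shift

# Test data: a grid of small values, plus the worked example.
for a in range(50):
    for b in range(50):
        assert gcd_euclid(a, b) == gcd_stein(a, b)
print(gcd_euclid(252, 105))   # 21
```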
Johannes Hubritus Cornelius Lisman (c.1900-c.1990) (CR:7), often cited as J.H.C. Lisman, was a Dutch scientist noted, in economic thermodynamics, for []

In 1946, Lisman, in his book Econometrics, Statistics, and Thermodynamics, presented a compilation of various previously published articles.

In 1949, Lisman published "Econometrics and Thermodynamics: A Remark on Davis' Theory of Budgets", in which he picks apart the economic equations of American mathematician Harold Davis, via the thermodynamic isomorphisms (or mathematical isomorphism) technique, concluding that none of the variables Davis used in his mathematical economic models seem to play the same role as entropy in thermodynamics. [1]

Lisman seems to have co-authored the 1931 article "The melting-curve of hydrogen to 450 kg/cm2". [2]

1. (a) Lisman, Johannes H.C. (1946). Econometrics, Statistics, and Thermodynamics: a Compilation and Extension of Different Statistical Papers Published in Various Journals. The Netherlands Postal and Telecommunications Service.
(b) Lisman, Johannes H.C. (1949). "Econometrics and Thermodynamics: A Remark on Davis' Theory of Budgets" (abs), Econometrica, XVII, 59-62.
(c) Georgescu-Roegen, Nicholas. (1971). The Entropy Law and the Economic Process (pg. 17). Cambridge, Massachusetts: Harvard University Press.
2. Keesom, W.H. and Lisman, J.H.C. (1931). "The melting-curve of hydrogen to 450 kg/cm2," Kon. Akad. van Wetens. Amst. Proc. 34, 598.

Further reading
● Lisman, J.H.C. and De Man, R. (1985). "Entropy: Letter to the Editor", Statistica, 45: 115-17.

External links
● Lisman, J.H.C. – WorldCat Identities.
Measuring Investment Efficacy Measuring the success of a company’s investment program is both vitally important and very difficult for an owner. It’s vitally important because as alumni of IOI Training Courses know, it is a company’s present investment programs that act as the main driver of the medium-term profit growth at the company. It is difficult because, as owners without inside information about specific projects, there is a lot of guestimation that goes into any analysis. As long-time readers of our work will know, our preferred solution to measuring investment efficacy is to assess historical growth of the company’s profits versus our benchmark (the growth rate of nominal GDP). An article posted here about two years ago explains the process we use in detail. The graph we have used in IOI ChartBooks and other research to express efficacy looks like this: This representation compares the actual growth in profits at Garmin (dark columns) to what profit growth would have been if they moved lock-step with nominal GDP over the period (light columns). Obviously, the graph is very sensitive to the starting point from which one measures – a weakness of the representation about which we’ve always felt uncomfortable. Garmin ($GRMN) grew much more quickly than GDP in the 2007-2009 period, thanks to the rapid acceptance of GPS navigation systems in automobiles. However, if we measured the company’s profit growth from its 2009 peak against GDP, it would have underperformed terribly. GDP growth may have been tepid post financial crisis, but Garmin’s profits (using IOI’s preferred measure of Owners’ Cash Profits or OCP) crashed by about 80%! To overcome this sensitivity to starting points, we started thinking about better ways to represent this. The method on which we finally settled has its weaknesses too, but does more accurately represent historical investment efficacy, in our opinion. 
First, we start with actual profit in the first year of our historical series and increase that profit at the rate of nominal GDP growth for the remaining time. Then, we compare that series to the actual profits the company generated over that time and count any positive number as an "excess" and any negative one as a "deficit." Then, we do the same thing starting in year two and keep repeating for the first five years of the 10-year historical series. Once done, we have a "waterfall" of excesses and deficits that looks like this (we use Garmin in this example):

You can see that in Garmin's case, when measured from Year 1 (2006), the company generated excess profits for all years but the last, for a cumulative excess of $2.3 billion. Moving down to Year 2, however, the company generated excess profits for only four of the remaining eight years, generating a deficit of $473 million. After repeating the process for years 1-5, we made a graph of each cumulative excess or deficit and came up with the following:

Note that we are only running this analysis for the first five years of our 10-year historical series. This is because it often takes several years for the effects (positive or negative) of corporate investments on the profits of the firm to show up. This analysis is essentially looking at the investments made from 5-10 years ago and judging how those investments influenced profit growth in the subsequent period.
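The rolling excess/deficit procedure described above can be sketched in a few lines. This is our reading of the method, not IOI's actual code, and the profit and GDP numbers below are invented purely for illustration: for each starting year, compound the actual first-year profit at nominal GDP growth, then subtract that benchmark series from the actual profit series to get the cumulative excess (+) or deficit (-) for that start year.

```python
def benchmark_from(profits, gdp_growth, start):
    """Profit series had profits grown at nominal GDP from profits[start] onward."""
    series = [profits[start]]
    for g in gdp_growth[start + 1:]:
        series.append(series[-1] * (1 + g))
    return series

def rolling_excess(profits, gdp_growth, max_start=5):
    """Cumulative excess (+) or deficit (-) vs the GDP benchmark,
    measured from each of the first `max_start` years."""
    results = []
    for start in range(max_start):
        bench = benchmark_from(profits, gdp_growth, start)
        actual = profits[start:]
        results.append(sum(a - b for a, b in zip(actual, bench)))
    return results

# Toy illustration (invented numbers, not Garmin's): profits that spike, then fade,
# against a flat 4% nominal GDP growth assumption.
profits = [100, 160, 200, 150, 120, 110, 105, 100, 100, 100]
gdp = [0.04] * 10
print([round(x) for x in rolling_excess(profits, gdp)])
```

Note how a profit spike early in the series produces an excess when measured from Year 1 but deficits when measured from the peak years, which is exactly the starting-point sensitivity the waterfall is meant to expose.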
In contrast, if the firm's profits had grown from each starting year at GDP, owners would have had a rolling cumulative profit of $25.6 billion. Another way of thinking of this is that Garmin generated cumulative rolling profits of around $20.5 billion – "growing" about 20% slower than the economy at large. We use quotation marks around "growing" because these "rolling cumulative" numbers do not express a real amount of money that could have been made (we can't go back historically and add together profits under different starting conditions and scenarios). However, a comparison of the two numbers gives a good indication of the degree to which the company's past investments were or were not effective.

This analysis and graph seemed valid in Garmin's case, but we thought we would run the numbers against several other companies for which we have published valuations and see if those graphs resonated in those cases as well. Here are the graphs we created, along with commentary on each one.

Apple's ($AAPL) investments in the iPhone franchise in the mid-2000s were a grand slam home run. The firm spent a total of $1.3 billion from 2006-2010, and the cumulative rolling profits generated imply growth that is 360% faster than GDP.

IBM ($IBM) has been disinvesting and realigning its business since the financial crisis, and the results have been disruptive, as seen clearly above. The firm spent a total of $17.8 billion from 2006-2010, and the cumulative rolling deficit generated was ($4.0 billion), compared to $492.4 billion under the rolling cumulative GDP growth scenario. This works out to a very small "growth" underperformance of just under 1% less than GDP.

Kroger's ($KR) investments in organics in the mid-2000s have been very successful in boosting profit growth over the past 10 years. The firm spent a total of $2.5 billion from 2006-2010, and the cumulative rolling profits generated were $8.4 billion – an improvement on GDP growth of around 16%.
Oracle's ($ORCL) moves into hardware and applications in the mid-2000s have done a terrific job of generating excess profits for its owners. The firm spent a total of $29.5 billion from 2006-2010, and the cumulative rolling profits generated imply growth 65% faster than GDP.

We think these graphs are a fairly accurate depiction of what we consider true past investment efficacy, so we will continue using them in our analyses and will incorporate them into the suite of investment analysis applications we are in the midst of developing!
Midspan Deflection | Deflections in Simply Supported Beams

In simply supported beams, the tangent drawn to the elastic curve at the point of maximum deflection is horizontal and parallel to the unloaded beam. It simply means that the deviation from the unsettling supports to the horizontal tangent is equal to the maximum deflection. If the simple beam is symmetrically loaded, the maximum deflection will occur at the midspan.

Finding the midspan deflection of a symmetrically loaded simple beam is straightforward because its value is equal to the maximum deflection. In an unsymmetrically loaded simple beam, however, the midspan deflection is not equal to the maximum deflection. To deal with an unsymmetrically loaded simple beam, we will add a symmetrically placed load for each load actually acting on the beam, making the beam symmetrically loaded. The effect of this transformation to symmetry will double the actual midspan deflection, making the actual midspan deflection equal to one-half of the midspan deflection of the transformed symmetrically loaded beam.
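The transformation-to-symmetry rule can be checked numerically with standard beam-table formulas. A sketch with made-up numbers (the function names are ours): for a simple beam of span L and flexural rigidity EI, a single load P at distance b from the nearer support gives midspan deflection Pb(3L² − 4b²)/(48EI), while two symmetric loads P, each a distance a from its support, give a maximum (midspan) deflection of Pa(3L² − 4a²)/(24EI). Halving the symmetric result recovers the single-load case, exactly as the text states:

```python
def midspan_single_load(P, b, L, EI):
    # Simple beam, one concentrated load P at distance b from the NEARER
    # support (b <= L/2): midspan deflection from the standard beam tables.
    return P * b * (3 * L**2 - 4 * b**2) / (48 * EI)

def midspan_symmetric_pair(P, a, L, EI):
    # Same beam carrying TWO equal loads P, each a distance a from its
    # support (a <= L/2): maximum (= midspan) deflection, standard tables.
    return P * a * (3 * L**2 - 4 * a**2) / (24 * EI)

# Illustrative units: kN, m, kN*m^2
P, L, EI, a = 10.0, 6.0, 2.0e4, 1.5
half_sym = 0.5 * midspan_symmetric_pair(P, a, L, EI)
actual = midspan_single_load(P, a, L, EI)   # equals half_sym
```

The two numbers agree: doubling the loading by symmetry doubles the midspan deflection, so the actual midspan deflection is one-half of the symmetric beam's.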
Testing for Granger Causality

Several people have asked me for more details about testing for Granger (non-) causality in the context of non-stationary data. This was prompted by my brief description of some testing that I did in the "C to Shining C" posting of 21 March this year. I have an example to go through here that will illustrate the steps that I usually take when testing for causality, and I'll use it to explain some of the pitfalls to avoid. If you're an EViews user, then I can also show you a little trick to help you go about things in an appropriate way with minimal effort.

In my earlier posting, I mentioned that I had followed the Toda and Yamamoto (1995) procedure to test for Granger causality. If you check out this reference, you'll find you really only need to read the excellent abstract to get the message for practitioners. In that sense, it's a rare paper! It's important to note that there are other approaches that can be taken to make sure that your causality testing is done properly when the time-series you're using are non-stationary (& possibly cointegrated). For instance, see Lütkepohl (2007, Ch. 7).

If you are using a Wald test to test linear restrictions on the parameters of a VAR model, and (some of) the data are non-stationary, then the Wald test statistic does not follow its usual asymptotic chi-square distribution under the null. In fact, if you just apply the test in the usual way, the test statistic's asymptotic distribution involves 'nuisance parameters' that you can't observe, and so it is totally non-standard. It would be very unwise to just apply the test, and hope for the best on the grounds that you have a large sample size. Of course, testing for Granger (non-) causality is just a specific example of testing some zero restrictions on certain of the parameters in a VAR model, so the warning given above applies here. (Parenthetically, you can't get around the problem by using an LM test or an LR test, either.)
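As a refresher on the mechanics at issue: for q linear restrictions Rβ = r on a coefficient vector with estimate b and estimated covariance V, the Wald statistic is W = (Rb − r)′[RVR′]⁻¹(Rb − r), which is asymptotically chi-square(q) under the standard (stationary) conditions. A small self-contained sketch on synthetic stationary data, where that chi-square limit does hold (all variable names here are ours):

```python
import numpy as np
from scipy.stats import chi2

def wald_test(b, V, R, r):
    """Wald statistic for H0: R b = r; asymptotically chi-square(rows of R)."""
    d = R @ b - r
    W = float(d @ np.linalg.solve(R @ V @ R.T, d))
    return W, chi2.sf(W, df=R.shape[0])

rng = np.random.default_rng(42)
n = 400
x1, x2, e = (rng.normal(size=n) for _ in range(3))
y = 1.0 + 0.5 * x1 + e                      # x2 is truly irrelevant

X = np.column_stack([np.ones(n), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS estimates
resid = y - X @ b
V = (resid @ resid / (n - 3)) * np.linalg.inv(X.T @ X)  # est. cov. of b

W1, p1 = wald_test(b, V, np.array([[0., 1., 0.]]), np.zeros(1))  # H0: coef(x1)=0
W2, p2 = wald_test(b, V, np.array([[0., 0., 1.]]), np.zeros(1))  # H0: coef(x2)=0
```

Here the test on x1 rejects strongly (the true coefficient is 0.5) and the test on x2 typically does not. With I(1) regressors in a levels VAR, the same W would no longer be chi-square under the null, which is exactly the problem T-Y is designed around.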
What I'm going to do is:

• Remind you of what we mean by Granger non-causality testing.
• Spell out the steps that are involved in applying the Toda-Yamamoto (T-Y) procedure.
• Illustrate the analysis with a simple example, including some screen-shots from EViews.
• List a few things that you should not do when testing for causality.

First, a simple definition of Granger Causality, in the case of two time-series variables, X and Y: "X is said to Granger-cause Y if Y can be better predicted using the histories of both X and Y than it can by using the history of Y alone."

We can test for the absence of Granger causality by estimating the following VAR model:

Y[t] = a[0] + a[1]Y[t-1] + ..... + a[p]Y[t-p] + b[1]X[t-1] + ..... + b[p]X[t-p] + u[t]   (1)
X[t] = c[0] + c[1]X[t-1] + ..... + c[p]X[t-p] + d[1]Y[t-1] + ..... + d[p]Y[t-p] + v[t]   (2)

Then, testing H0: b[1] = b[2] = ..... = b[p] = 0, against H1: 'Not H0', is a test that X does not Granger-cause Y. Similarly, testing H0: d[1] = d[2] = ..... = d[p] = 0, against H1: 'Not H0', is a test that Y does not Granger-cause X. In each case, a rejection of the null implies there is Granger causality.

Note that in what follows I'll often refer to the 'levels' of the data. This simply means that the data have not been differenced. The series may be in the original units, or logarithms may have been taken (e.g., to linearize a trend). In either case, I'll talk about the 'levels'.

Now, here are the basic steps for the T-Y procedure:

1. Test each of the time-series to determine their order of integration. Ideally, this should involve using a test (such as the ADF test) for which the null hypothesis is non-stationarity, as well as a test (such as the KPSS test) for which the null is stationarity. It's good to have a cross-check.

2. Let the maximum order of integration for the group of time-series be m. So, if there are two time-series and one is found to be I(1) and the other is I(2), then m = 2. If one is I(0) and the other is I(1), then m = 1, etc.

3.
Set up a VAR model in the levels of the data, regardless of the orders of integration of the various time-series. Most importantly, you must not difference the data, no matter what you found at Step 1.

4. Determine the appropriate maximum lag length for the variables in the VAR, say p, using the usual methods. Specifically, base the choice of p on the usual information criteria, such as AIC and SIC.

5. Make sure that the VAR is well-specified. For example, ensure that there is no serial correlation in the residuals. If need be, increase p until any autocorrelation issues are resolved.

6. If two or more of the time-series have the same order of integration, at Step 1, then test to see if they are cointegrated, preferably using Johansen's methodology (based on your VAR) for a reliable result.

7. No matter what you conclude about cointegration at Step 6, this is not going to affect what follows. It just provides a possible cross-check on the validity of your results at the very end of the exercise.

8. Now take the preferred VAR model and add in m additional lags of each of the variables into each of the equations.

9. Test for Granger non-causality as follows. For expository purposes, suppose that the VAR has two equations, one for X and one for Y. Test the hypothesis that the coefficients of (only) the first p lagged values of X are zero in the Y equation, using a standard Wald test. Then do the same thing for the coefficients of the lagged values of Y in the X equation.

10. It's essential that you don't include the coefficients for the 'extra' m lags when you perform the Wald tests. They are there just to fix up the asymptotics.

11. The Wald test statistics will be asymptotically chi-square distributed with p d.o.f., under the null.

12. Rejection of the null implies a rejection of Granger non-causality. That is, a rejection supports the presence of Granger causality.

13.
Finally, look back at what you concluded in Step 6 about cointegration. Remember: "If two or more time-series are cointegrated, then there must be Granger causality between them - either one-way or in both directions. However, the converse is not true." So, if your data are cointegrated but you don't find any evidence of causality, you have a conflict in your results. (This might occur if your sample size is too small to satisfy the asymptotics that the cointegration and causality tests rely on.) If you have cointegration and find one-way causality, everything is fine. (You may still be wrong about there being no causality in the other direction.) If your data are not cointegrated, then you have no cross-check on your causality results.

Now it's time for our example. As usual, the data are available on the page that goes with this blog, and there is an EViews workfile on the page. We're going to take a look at the world prices of Arabica and Robusta coffees. Here's a plot of the monthly data from January 1960 to March 2011 - a nice long time series with lots of observations.

It looks as if there may be a structural break in the form of a shift in the levels of the series in 1975. We know that this will affect our unit root and cointegration tests, and it will also have implications for the specification of our VAR model and causality tests. This can all be handled, of course, but rather than getting side-tracked by these extra details, I'll focus on the main issue here, and we'll shorten the sample as follows:

Now let's go through the various steps for the T-Y causality testing procedure. The results to back up what I conclude along the way are in the EViews file, which contains a 'Read_me' text object that gives more explanation.

1. Both of the series are I(1) when we apply the ADF and KPSS tests, allowing for a drift and trend in each series.

2. So, m = 1.

3. We set up a 2-equation VAR model in the levels of the data, including an intercept in each equation.

4.
The various information criteria suggest that we should have a maximum lag length of 3 for each variable:

5. However, when we then examine the residuals and apply the LM test for serial independence against the alternative of AR(k), for k = 1, ....., 12, we find that there are problems. This serial correlation is removed (at least at the 5% sig. level) if we increase the maximum lag length to p = 6:

This estimated model is also 'dynamically stable':

6. Johansen's Trace Test and Max. Eigenvalue Test both indicate the presence of cointegration between the 2 series, at the 10% significance level:

7. This last result is not going to affect anything we do.

8. As m = 1, we now re-estimate the levels VAR with one extra lag of each variable in each equation. Here is where we need to be careful if we're going to "trick" EViews into doing what we want when we test for causality shortly. Rather than declare the lag interval for the 2 endogenous variables to be from 1 to 7 (the latter being p + m), I'm going to leave the interval at 1 to 6, and declare the extra (7th.) lag of each variable to be an "exogenous" variable. The coefficients of these extra lags will then not be included when the subsequent Wald tests are conducted. If I just specified the lag interval to be from 1 to 7, then the coefficients of all seven lags would be included in the Wald tests, and this would be incorrect. If I did that, the Wald test statistic would not have its usual asymptotic chi-square null distribution.

9. & 10. Now we can undertake the Granger non-causality testing:

11. Note that the degrees of freedom are 6 in each part of the above image - that's correct: p = 6. The extra 7th. lag has not been included in the tests.

12. From the upper panel of results, we see that we cannot reject the null of no causality from Robusta to Arabica.
From the lower panel we see that we can reject the null of no causality from Arabica to Robusta, at the 10% significance level, and virtually at the 5% significance level as well. In summary, we have reasonable evidence of Granger causality from the price of Arabica coffee to the price of Robusta coffee, but not vice versa.

Some things to watch out for:

• Don't fit the VAR in the differences of the data when testing for Granger non-causality.
• If you are using a VAR model for other purposes, then you would use differenced data if the series are I(1), but not cointegrated.
• If you are using a VAR model for purposes other than testing for Granger non-causality and the series are found to be cointegrated, then you would estimate a VECM model.
• The usual F-test for linear restrictions is not valid when testing for Granger causality, given the lags of the dependent variables that enter the model as regressors.
• Don't use t-tests to select the maximum lag for the VAR model - these test statistics won't even be asymptotically std. normal if the data are non-stationary, and there are also pre-testing issues that affect the true significance levels.
• If you fail to use the T-Y approach (adding, but not testing, the 'extra' m lags), or some equivalent procedure, and just use the usual Wald test, your causality test results will be meaningless, even asymptotically.
• If all of the time-series are stationary, m = 0, and you would (correctly) just test for non-causality in the 'old-fashioned' way: estimate a levels VAR and apply the Wald test to the relevant coefficients.
• The current Wikipedia entry for Granger Causality has lots of things wrong with it. In particular, see the 'Method' & 'Mathematical Statement' sections of that entry.

Finally, if you want to check things out some more, I've put a second EViews workfile, relating to the prices of natural gas in Europe and the U.S., on this blog's page. In that file you'll find a "Read_Me" object that will tell you what's going on.
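The key T-Y mechanics can also be sketched outside EViews. Below is a rough Python illustration on synthetic I(1) data in which X Granger-causes Y, with a fixed lag length p = 2 and m = 1, so one augmentation lag enters the regression but is excluded from the Wald test. This is a sketch of the idea, not a reproduction of the coffee-price analysis, and it skips the lag-selection and diagnostic steps:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 500
x = np.cumsum(rng.normal(size=n))          # x is I(1): a random walk
e = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):                      # y is I(1); x Granger-causes y
    y[t] = y[t - 1] + 0.7 * (x[t - 1] - x[t - 2]) + e[t]

p, m = 2, 1                                # VAR lag length p, max. integration order m

def lagmat(s, k):
    # columns are lags 1..k of s, aligned with observations t = p+m, ..., n-1
    return np.column_stack([s[p + m - j : len(s) - j] for j in range(1, k + 1)])

# y-equation of the augmented levels VAR: const + (p+m) lags of y and of x
Y = y[p + m:]
X = np.column_stack([np.ones(len(Y)), lagmat(y, p + m), lagmat(x, p + m)])
b, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ b
V = (resid @ resid / (len(Y) - X.shape[1])) * np.linalg.inv(X.T @ X)

# Wald test on the FIRST p lag coefficients of x only; the m extra lag is
# in the regression (to fix the asymptotics) but excluded from the test.
idx = 1 + (p + m) + np.arange(p)           # columns holding x lags 1..p
R = np.zeros((p, X.shape[1]))
R[np.arange(p), idx] = 1.0
d = R @ b
W = float(d @ np.linalg.solve(R @ V @ R.T, d))
pval = chi2.sf(W, df=p)                    # chi-square(p) under H0 of no causality
```

With this data-generating process the test rejects non-causality from X to Y decisively; running the mirror-image regression (swap the roles of x and y) gives the other direction.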
Note: The links to the following references will be helpful only if your computer's IP address gives you access to the electronic versions of the publications in question. That's why a written References section is provided.

Lütkepohl, H. (2006). New Introduction to Multiple Time Series Analysis. Springer, Berlin.
Toda, H. Y. and T. Yamamoto (1995). Statistical inferences in vector autoregressions with possibly integrated processes. Journal of Econometrics, 66, 225-250.

406 comments:

1. Very interesting and thorough explanation. Do you know, and can you elaborate on, how the Group Statistics/Granger Causality Test command differs from the above procedure, and whether it is safe to use? I tried a bit and get different results with both methods.

2. Marvin - thanks for your comment and question. Using the commands you asked about, the extra "m" lags don't get included in the VAR model. If you use that approach and specify, say, p=4, then 4 lags of each variable get included in each equation of the VAR, but ALL 4 of them then get tested for G-causality. This is OK if every variable in the model is stationary, but not otherwise. I hope this helps.

3. Dear Professor, if my time series are I(0) and I(1), is it correct to use the levels of the data when testing for Granger non-causality? Is there no need to difference the I(1) data? Is my understanding correct or not?

4. Anonymous - that's right. You use the levels of BOTH variables. But you MUST then follow the Toda-Yamamoto procedure by adding ONE extra lag of both variables in both equations, but you DON'T include this extra lag in the causality tests. Now, this is all to do with causality testing. If you were wanting to estimate a VAR for some other reason, such as forecasting, then you would difference the I(1) variable. For overall consistency in this case you'd probably want to difference the I(0) variable too. If you difference an I(0) variable it is still stationary.
The risk is you may introduce (negative) autocorrelation into the errors because of over-differencing one of the variables. But you can easily test for this, and you can usually get rid of it by just adding one or more extra lags of one or both variables. I hope this helps!

5. Hi Professor, this is Muhammad Meraj from Karachi, Pakistan. I need your help regarding Granger causality testing. If all of my variables are I(1), should I still use the above-mentioned procedure as suggested by Toda and Yamamoto? Please explain. Muhammad Meraj

6. Muhammad: Yes, if all of the variables are I(1), and whether or not they are cointegrated, you need to use the Toda-Yamamoto procedure (or some equivalent one, such as that proposed by Helmut Lütkepohl). Take a look at the T & Y paper - just read the abstract - it is very clear and easy to follow.

7. Dear Sir, just a brief comment. If I am not mistaken, with only two variables (and provided that they are cointegrated) Granger causality testing could be done by transforming the VAR in levels to its ECM representation and using a Wald test for joint significance of the EC term and the lagged differenced variables. However, when one is considering larger systems, T-Y should be used.

8. Anonymous - thanks for the comment. Actually, it's got nothing to do with the number of variables in the system. You can transform a multivariate VAR into a VECM. The trouble is that the limit distribution of the Wald test statistic will not be chi-square if ANY variable in the VAR is I(1), whether or not any of the variables are cointegrated. The test statistic's limit distribution has nuisance parameters in it. In addition to this main point, consider the following. What if one variable is I(1) and one is I(0), in which case they can't be cointegrated? What if there is some uncertainty about the outcome of the cointegration tests? In these cases the T-Y (or equivalent) methodology is the way to go.
All you need to know is the maximum order of integration among the variables in question. The point is to alter the problem in a way that ensures that the Wald statistic has its usual asymptotic distribution.

9. Dear Professor, thank you very much for the perfect instruction! I hope you will answer my question. If there is a structural break, like in your data before you cut them, how does it change the T-Y results? Unfortunately I could not cut my data because I have a sample only from 2006, and there is a structural break due to the crisis. I know that the ADF test has low power, but as far as I understand, in the T-Y procedure one should find the largest possible order of integration, so low power is not a drawback. Best regards, Nataliya Gerasimova

10. Dear Mr. Giles, thank you for your response. As I remember, there is a paper by Toda and Phillips about this issue, in which they talk about a "sufficient cointegration condition" that makes the usual distribution valid. Also, for two variables it is easy to test the condition. That is what my first post was about (I apologise for not making it clear). In addition, when using the two-step OLS procedure (and not Johansen ML), in which the EC term estimated in the first step is included in the short-run equation, the usual F-test or Wald test should in my opinion be valid, because the testing now includes only stationary variables (again assuming that cointegration holds). Looking forward to your comment. Thank you for your time. Best regards,

11. Nataliya: If you have a short sample with a structural break, the ADF test has 2 problems: low power due to the small sample size, and a tendency to "discover" unit roots that aren't there, due to the structural break. Both of these will lead you in the direction of concluding the series is I(1), whether it is or not. Then you would use the T-Y procedure. The only thing I would change is to include a dummy variable (or variables) for the break in the equation in the VAR that "explains" the variable with the break.
I hope this helps.

12. Goran - thanks for the comment. You are right about the Toda-Phillips result. However, things don't work for the Wald test in the case of a VECM unless you take special steps - equivalent to T-Y. Just because every variable is stationary, this doesn't guarantee that the limit distribution of the usual Wald test statistic is chi-square. Take a look at the paper by Dolado and Lütkepohl in the 1996 Econometric Reviews, for example. The following link may also be helpful: I hope this helps.

13. Thank you for a fast and helpful answer!

14. Dear Professor, when time series are cointegrated, could we perform impulse response analysis in a VAR? And also, could you briefly let me know in which situations we can use an SVAR? Thank you very much,

15. Dear Professor, some have argued that the T-Y approach is less powerful than the Toda and Phillips approach and that it is also inefficient, as the order of the VAR is intentionally set too large. Then, other than the T-Y procedure, is there any new way to conduct a Granger causality test in a nonstationary VAR?

16. Dear Professor, how should I run the VECM model if the ADF test shows that there is one variable of at least I(2)? How could I ensure that the variable is 100% I(3)? Thank you very much. Kind regards,

17. Dear Professor, thank you for the valuable info provided on your blog. If I want to apply the Granger causality test to volatility index data, in which the vol index is the independent variable and I(0), and the stock market index is I(1), then what should be the appropriate procedure? Kindly reply.

18. Dear Professor, I need help with Granger Wald interpretation.

Null Hypothesis              chi2      df   Prob > chi2
MCAP does not cause GDP      33.469    2    0.000
TR does not cause GDP        33.039    2    0.000
TVST does not cause GDP      31.926    1    0.000
SM does not cause GDP        37.796    4    0.000
GDP does not cause MCAP      0.96081   2    0.619
GDP does not cause TR        11.559    2    0.003
GDP does not cause TVST      6.86      2    0.032

How would you interpret each of these, and say which ones show causality?
Yours affectionately,

19. Harjevan: Consider the hypothesis, "GDP does not cause MCAP". The p-value is 61.9%, which is very large. It means that the probability of seeing a value for the test statistic of 0.96081 (your value), or larger, if the hypothesis is true, is 61.9%. So, what you have observed is quite a likely event if the null hypothesis is true. Accordingly, I would NOT reject the hypothesis of "no causality". In all of the other cases the p-values are essentially zero. You have observed events (values for the test statistics) that are very rare if the null hypothesis is true. But you HAVE observed them! So, in all likelihood the null hypothesis is not true - I'd REJECT the hypothesis of "no causality" in each of these other cases. So there IS causality from MCAP to GDP (for example), but not the reverse. I hope this helps.

20. Ruhee: You have a variable that is I(1). If you have ANY variables that are non-stationary, the Toda-Yamamoto procedure that I described in detail is appropriate. So, just follow the example.

21. Ben: If the data are cointegrated, I'd prefer to do the impulse response analysis using a VECM rather than a VAR.

22. Ben: Helmut Lütkepohl has an alternative method to T-Y. You might want to take a look at his book "New Introduction to Multiple Time Series Analysis", and check his website at http://www.eui.eu/

23. Dear Prof. Giles, thank you for your clear explanation. I would like to ask if the T-Y procedure is also valid if we include a dummy as an exogenous variable in the VAR construction. Thank you.

24. Dave: The procedure is exactly the same if dummy variables appear as exogenous variables in the model. Of course, these variables do not need to be tested for non-stationarity. They will always be stationary.

25. Dear Professor Giles, first, congratulations on your inspiring thoughts and useful blog!!!
My name is Peter and I am a PhD student in political economy, and my PhD thesis's subject is "determinants of bank loan supply and demand in Bulgaria for the period 2000-2010". I am running a regression using a VECM (all my data is time series, i.e. nonstationary in the levels and stationary in the first difference, guided by ADF and PP tests). As expected, most of my variables (GDP, Gross Value Added, gross investments, CPI, GDP deflator, salaries, loans stock, loans new business volume, deposits, bank balance sheet data, interest rates, etc.) are trending, nonstationary in levels. Next I test for cointegration using the Johansen cointegration test embedded in EViews 5.0. I am assuming all my demand and supply determinants, loan variables included, to be endogenous, and I am experimenting with different lags and combinations (keeping the economic logic of the signs of coefficients)… So I am stuck with the following three problems:

- The Johansen cointegration test, for example, shows that there are three cointegration vectors (rank=3). Can I run a VECM model with only one error correction equation, in which the credit variable is explained by the other 4 variables in the regression, skipping the fact that 3 cointegration vectors are assumed by the Johansen test? Concerning the error correction term, I know there are interactions between endogenous variables, but since I am interested only in loans as dependent in the long term, can I omit the other two cointegrating vectors, in which loans are not included?

- If the Johansen cointegration test shows me 5 cointegrating vectors for a 5-variable test (rank=5, having 5 variables), does this signal a spurious regression and misspecification?

- Assuming that everything is ok with the cointegrating equations, it happens that in the short-term model lagged variables are changing the signs of the coefficient for the same variable.
(The t-stats have high values, signaling that the coefficients are different from zero and the lagged variables cannot be omitted.) For example, loan demand is positively related to lnGDP in the cointegrating equation and in the first and second lags in the short-term model, but the third and fourth lags of lnGDP have negative signs, still with high t-stats - how is this interpreted? Thanks for your time and consideration,

27. Dear Professor Giles, thanks a lot for this interesting blog! Regarding your coffee example I was wondering about one step in your procedure: You showed that the inverse roots are inside the unit circle, which implies stability of the model. But I am not sure what this fact should tell me. Is there then a contradiction to the unit root tests in the beginning (i.e. can the model be stable when the series are all I(1))? Thanks a lot! Best regards,

28. Paul: Thanks for the interesting comment. I don't think there's any conflict here. In the case of the unit root testing, and the finding that the data are I(1), the underlying model is an AR(1) model, and we find that we can't reject the hypothesis that the autocorrelation coefficient is unity. When we get to the VAR model we have a much more complex underlying process. We now have a bivariate AR(6) process, and when we estimate it this model is found to be dynamically stable.
To me, the explanation lies in the fact that we have two completely different models. If you use my EViews code and estimate a 2-equation model with a lag length of one in each equation, the inverse roots are 0.991 and 0.983. Indeed, the estimated coefficient on the own-lag for Arabica is 0.994 (se = 0.0215), so the t-statistic for testing that the coefficient is unity is -0.28. We can't reject a unit root. In the case of Robusta, the corresponding numbers are 0.9796 (0.0164), t = -1.24, giving the same conclusion. I hope this helps!

29. Dear Professor Giles, thanks for the quick response and the good explanation. Am I right in assuming that stability or instability of our model does not make any difference with regard to the G-causality tests? In other words, if it turns out that my model is unstable, I will nevertheless proceed as usual? I think in most papers stability is not checked at all, as only stationarity of the process matters - or? Best regards,

30. Paul: You are correct that a lot of people don't check the dynamic stability of the model in this particular context. It is obviously something that is crucial if, say, your objective in estimating the VAR was to generate forecasts, or look at impulse response functions associated with policy shocks. Strictly speaking, the proof of the Toda and Yamamoto result does not rely on the VAR being dynamically stable, so yes, you could still go ahead as described in the event that it was not. However, personally I still like to check this out, for the following reason. If there are inverse roots outside the unit circle then this suggests that the VAR is in some sense mis-specified, and I don't like to apply the test in the context of such a model. I find, invariably, that the issue of non-stationary roots can be resolved by adjusting the maximum lag length in the model.

31. Dear Mr. Giles, first of all many thanks for the clear explanation of the workings of Granger causality.
I am currently working on a VECM for my thesis in which I study the linkages between energy consumption and a number of economic indicators. I have two questions: 1) A number of similar studies report the sum of the lagged coefficients of the VECM as the sign of the Granger causality (calculated with a joint Wald chi-square). What does the sign of the causality imply w.r.t. the relationship between the variables? Does the sign of the Granger causality even matter at all? 2) I would like to perform impulse response analysis. However, EViews does not provide confidence intervals. How can I obtain p-values or confidence intervals to show the significance of the impulse responses? Thanks in advance for taking the time to respond. Kind regards, Nick from Netherlands 32. Nick: Good questions. Question 1: It's not clear to me that the sum of the coefficients really tells us the "sign" of the causality. There are all of the dynamic effects between the equations that have to be taken into account, and that's precisely what an impulse response function does. If the IRF is positive for all periods, fading away to zero, I'd say that's a positive "sign" for the causality. If it is positive, then negative, and then dampens down, I'd say that the "sign" depends on the time-horizon. Whether or not the sign matters for the causality is dependent on the context, I think. If we have a 2-equation model for income and consumption, and the IRF for consumption responding to a shock in income is not positive everywhere, I'd be a bit worried, personally, about the specification of the lags in the model, etc. In other situations the "direction" of the causality may be all that is of interest. Regarding your second question - you're right. EViews does this for the VAR impulse responses, but not the VECM ones. Grrrr! You're not the only one to be asking. See http://forums.eviews.com/viewtopic.php?f=5&t=4952 My best answer is to bootstrap them.
This is what is done in Helmut Lutkepohl's software, JMulTi. See: http://www.jmulti.de/download/help/vecm.pdf I hope this helps - just a little! 33. Nick: A follow-up: There is a step-by-step description of bootstrapping confidence intervals for IRFs from VECMs in the following paper: A. Benkwitz & H. Lutkepohl, "Comparison of Bootstrap Confidence Intervals for Impulse Responses of German Monetary Systems", Macroeconomic Dynamics, 2001, 5, 81-100. 34. Dear Professor Giles, I quite couldn't get this part: you leave the interval at 1 to 6, and declare the extra (7th) lag of each variable to be an "exogenous" variable. Could you give me more explanation of why you did it? Is it a similar concept to the dummy variable trap? 35. Jasmin: Because the highest order of integration among the series is I(1), we need to add one more lag of each variable, beyond the 6 lags that we've already decided upon. It's CRUCIAL that the coefficient on this extra lag is NOT included in the Wald test for non-causality. (See steps 8 & 9 in the post.) Now, this poses no problem. However, if you want to use the "built-in" Granger causality test in EViews, you have to use a "trick" to ensure that only 6 lag coefficients are included in the test, and not all 7. The way to do this is to say that you are using lags 1 to 6 in the "lag length" box, and then add the 7th lags in the extra "exogenous variables" box. This is an EViews-specific situation. You could, of course, fit the VAR with 7 lags, and then select "VIEW", "Coefficient tests", "Wald Test", and specify the six coefficients that you want to test. This would take a bit more work, but gives identical answers. Doing it the way I suggested gives you ALL of the causality tests in one hit. 36. Dear Mr. Giles, Many thanks for the quick response and the Lütkepohl references. I don't quite 'get it' though, how to perform the bootstrapping of the confidence intervals in EViews.
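The lag bookkeeping behind the "trick" described just above can be made concrete. This is a small illustrative Python sketch of the accounting only (which lags are Wald-tested and which go in the "exogenous variables" box); it is not EViews code, and the function name is my own:

```python
def ty_lag_split(p, d):
    """Toda-Yamamoto bookkeeping: fit a VAR(p+d), where p is the preferred
    lag length and d is the maximum order of integration, but Wald-test
    only the coefficients on lags 1..p.  In the EViews 'trick', lags 1..p
    go in the lag-length box and lags p+1..p+d go in the exogenous box."""
    tested = list(range(1, p + 1))
    exogenous_extras = list(range(p + 1, p + d + 1))
    return tested, exogenous_extras

# the coffee example: p = 6 preferred lags, d = 1 because the data are I(1)
tested, extras = ty_lag_split(p=6, d=1)
print(tested)  # [1, 2, 3, 4, 5, 6]  -- these enter the Wald test
print(extras)  # [7]                 -- declared "exogenous", never tested
```

The point the reply stresses is exactly the split this function returns: the extra lag is there only to fix up the asymptotic distribution of the Wald statistic, so its coefficient must never appear among the tested restrictions.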
I guess I should settle for the Wald chi-square tests for Granger causality (I can explain the majority of the results on the basis of economic reasoning), and merely use the IRFs as a point of reference for the 'sign' of the relationship. Is it right for me to use the IRFs in such a manner? Or would you suggest not discussing the IRFs at all, seeing as though I cannot provide confidence intervals/significance levels (thus no empirical evidence). Nick from Netherlands 37. Nick: I'd definitely include the IRFs, even without the confidence intervals. To construct the intervals you'll have to write an EViews program to go through the steps I referred to previously. You certainly can't "trick" EViews into doing it. I'm afraid I haven't written a program myself - I've never had the need to date. 38. Nick: Why not just run your data through the JMulTi package? You can download it for free from http://www.jmulti.de/download.html 39. Nick: Grrrrrrrr! I just downloaded & installed JMulTi. It doesn't do IRFs with a VECM! I'm beginning to feel that I may have to do some programming! 40. Well unfortunately programming isn't really my forte, nor is econometrics to be honest. It took me quite some time to get where I'm at right now in terms of understanding the workings of VAR models and cointegrated data. I will include the IRFs in the study, since they do provide useful information. I would like to thank you for being actively involved with solving my issues! And if, by any chance, you might find a solution to our IRF confidence interval issue I am looking forward to reading about it on your blog. Regards, Nick 41. Dear Mr. Giles, First of all, thank you for your helpful blog. Second, I want to investigate the relationship between the exchange rate and the stock market index in Malaysia using daily time series from 2005 to 2011.
I have three time series variables, which I transformed into logs: stock index, exchange rate and gold price. I used the ADF and KPSS tests in EViews and the results showed that they are integrated of order one, I(1). Then I applied the Johansen cointegration test. The VAR lag length criteria showed that AIC=3 and LM=6 for maxlag=12. So I used 3 lags but got no cointegration. I read somewhere that if your equation has a break it might give you faulty results, so I want to know how to test my cointegration test for a structural break in EViews, and whether my lag selection is correct. 42. Hello! I think JMulTi is able to do IRFs with a VECM and also bootstraps the corresponding confidence intervals: - Import your data - VECM Analysis -> Select time series and specify your model - Structural Analysis -> Impulse response analysis -> Bootstrap Confidence Intervals (e.g. Hall) - Display impulse responses Best regards, 43. Paul: Thank you - you're absolutely right! I'll post a short separate item on this today. 44. @Anonymous: I've already posted twice on the topic of cointegration testing when there may be structural breaks. I hope this helps! 45. Dear Paul, The software that you mentioned was useful - thank you so much for that. I have another favor to ask. I have done the test as you told me and I have the results, but I am having a hard time interpreting them. I was wondering if you have time to take a look at them. I don't know if my VECM has a break or not, and if it has a break, then at which observation? I have posted my results in the link below: 46. Hello Dave, I'd like to commend you on the excellent explanation of the VAR. Question: While checking for serial independence, how did you settle on 6 lags using the LM statistics? I tried using 5, 4, 3 lags, but the p-values were not consistent in all cases. How did you arrive at 6? Thanks and keep doing the great job you are doing! 47. @Anonymous: Thanks for the comment!
When I look back at what happens with 3, 4, 5 lags there are always some very small p-values for the LM test at low-order lags (of the autocorrelation function). I went to 6 lags to be conservative. I'd rather over-fit the model than under-fit it. Hope that helps. 48. Dave, Thanks for the quick response. I see the small p-values at low-order lags - but what made you pick 6? Was there a particular p-value that made you stop at 6? Secondly, by small p-values, I assume you mean values close to zero. 49. @Anonymous: That's the thing with p-values - the choice is subjective. A value close to zero implies there is a very low probability of observing the actually observed value of the test statistic, if the null is true. But we HAVE observed it, so we then reject the null (of independence, in this case). I focussed on the short lags in the autocorrelation function - very small p-values when the alternative is autocorrelation of orders one, two, three, ... suggest model mis-specification (e.g., through the omission of variables - lagged values in the case of a VAR). 50. Very informative piece on VAR. I have a simple question. I want to construct an unrestricted VAR on 3 variables: hunger incidence (indicator of food security), rice price (measure of access) and rice yield (measure of productivity). If you construct a correlation matrix, the value for rice price and yield is 0.6. Does correlation even matter in a VAR framework? Many thanks! Lenard (Philippines) 51. Lenard: Thanks for the comment. Frankly, I wouldn't be looking at that correlation. 52. Just to confirm, so correlation (that is, through the correlation matrix) is not an issue in VAR? It is not relevant? Or do you mean that my choice of 3 variables needs to be improved? Thanks again! Lenard (Philippines) 53. Lenard: correlation is not relevant. 54. Thank you for your help and clarification! Lenard (Philippines) 55. Dear Mr. Giles, I'm back again with a new question about the interpretation of the VECM estimates.
I'll try to keep it short. 1) As previously described on this blog, I use a Wald chi-square test for short-run (Granger) causality between 6 endogenous variables (in a VECM context). 2) I test for long-run causality by testing the adjustment coefficients of the error correction terms (ECTs, four of them to be precise). This is where it gets tricky; I have 4 ECTs and 6 simultaneous equations. Does an ECT have any indicative value if its adjustment parameter is insignificant? I am trying to figure out the interaction between long-run causality and long-run equilibrium relationships, but I have to admit that I'm quite puzzled. Kind regards, Nick (Netherlands) 56. Hi Mr. Giles, Thanks for the thorough explanation of the causality test in a nonstationary framework. I see that you used EViews to demonstrate your method. Can you also demonstrate it using R? It will be very helpful. Thank you. Ryan (Indonesia) 57. Dear Mr. Giles, Following steps 8, 9, and 10, do we have to check the stability of the model again? If so, what can we do if the model is not stable? 58. Dear Mr. Giles, Thank you for all the great scientific information that you share with us. Can I ask you to explain the SVAR model and its application in brief? 59. Dear Dr. Giles, and anyone here on this thread. The Phillips-Perron and the ADF critical values are the same, correct? Your response is highly appreciated. 60. Anonymous: the ADF and PP critical values are the same. 61. Anonymous: re. SVARs - I'll see what I can do when I get back from my current travels. 62. Ryan: Thanks for the comment. I'll see what I can do when I get back from my current trip to N.Z. 63. Razi: Thanks for the comment. No, I wouldn't be worrying about stability after steps 8, 9 and 10. The T&Y approach requires that the model be "properly specified" in the levels before you add the extra lag(s) to allow for the unit roots. That's all. I hope that helps. 64.
Dave, Nice blog you have here. Question: when checking for unit roots, I seem to be getting very high positive values (with prob. 1.000); this is very abnormal, I suppose. How do I solve this? 65. Anonymous: Any positive value for the ADF statistic leads to NON-rejection of the null hypothesis that there is a unit root. It's very common. There is nothing to "solve". The data are simply non-stationary. First-difference the data and in all likelihood the series will then be I(0), implying that the original series was I(1). 66. Thanks for the response relating to the positive ADF values - when checking whether to reject the null or not, we are checking values in absolute terms, right? E.g. we can reject the null that there is a unit root if a series has a t-statistic of 5.091214 (with p-value 1.000) where the critical values are -4.2349, -3.5403, -3.202445... 67. Anonymous: NO! It's a 1-sided test. The critical values are always negative, so in the case of a positive t-statistic, you know immediately that you WON'T reject the null of a unit root. 68. Thanks Dr. Giles... Keep up the blog and the excellent work! 69. Using the Johansen cointegration test in EViews, can I include structural breaks as exogenous variables to account for breaks in the series? 70. Sal: See this post - as well as the 2 earlier ones that it mentions. 71. Dear Professor Giles: Thank you for the excellent information; very helpful. However, I have a question about the number of lags in the Johansen cointegration test. Suppose that I tested for cointegration between two series that have structural breaks without considering the breaks, and determined the number of lags to be, for example, 5. When considering the breaks, do I have to go back and determine the number of lags? In other words, would including the breaks affect the number of lags, or should I be using the same number of lags as in the case without breaks, that is 5? 72. Anonymous: Thanks for the excellent question.
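The one-sided ADF decision rule in that exchange trips people up often, so here is the rule spelled out in a tiny illustrative snippet (plain Python, with the statistic and critical value quoted in the question above; not EViews output):

```python
def adf_reject(adf_stat, critical_value):
    """ADF is a one-sided (left-tail) test: reject the unit-root null only
    if the statistic lies to the LEFT of the (negative) critical value.
    Absolute values are irrelevant; a positive statistic can never reject."""
    return adf_stat < critical_value

# the statistic of 5.091214 quoted above, against the 5% critical value -3.5403
print(adf_reject(5.091214, -3.5403))  # False: do NOT reject the unit root
print(adf_reject(-4.80, -3.5403))     # True: a sufficiently negative statistic rejects
```

So comparing |5.09| with |-3.54| is exactly the mistake the reply flags: the comparison is on the signed values, not the absolute ones.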
Ideally, I'd go back and re-consider the number of lags. If in doubt, include extra, rather than fewer, lags. 73. Dear Mr. Giles, I am Aditya Bhan. I am doing my post-grad in quantitative economics. In the case of a VECM, the significance of the error correction term helps us to conclude upon long-run causation. Could you please outline the procedure for inferring long-run causation in the case of an unrestricted VAR model? 74. Dear Professor Giles: Thanks for the helpful comments. For the world prices of Arabica and Robusta coffees example that you illustrated, if you used the full sample from January 1960 to March 2011 to test for Granger causality, do we include a dummy variable (D = 0 from January 1960 to December 1975, and 1 from January 1976 to March 2011) for the break in the "exogenous variables" box, as C Arabica(-7) Robusta(-7) D? Or do we need to specify or change something? 75. Sal: Thanks for the comment - yes, that's what I'd do. 76. Thanks so much for the prompt response. 77. Dear Professor Giles, Thanks for your excellent explanation of the Granger causality test. I am going to test the causality between two variables during economic recessions. I have data for a long period including the recession periods. Please let me know how I can use all my sample data and test causality just for the recession periods. Many thanks in advance. 78. Dear Professor Giles, thanks for your assistance. Please, how can I use the Autoregressive Distributed Lag (ARDL) bounds testing approach to investigate the existence of a cointegrating relationship among variables? dele 79. bamidele: I'll try and put together an example using EViews and gretl over the next few days. 1. Dear Prof. Giles, I was wondering if it is possible to demonstrate with an example how to carry out a non-linear Granger-causality test between two variables. I do have some thoughts but I am not sure whether they're correct.
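The step dummy discussed in that exchange (D = 0 through December 1975, 1 from January 1976 onward) is mechanical to generate. A minimal sketch, assuming dates are represented as (year, month) tuples (my own convention for the illustration, not EViews syntax):

```python
def break_dummy(dates, break_date):
    """Step (level-shift) dummy: 0 before the break date, 1 from the break
    date onward.  Tuple comparison orders (year, month) chronologically."""
    return [1 if d >= break_date else 0 for d in dates]

# months straddling the January 1976 break in the coffee example
dates = [(1975, 11), (1975, 12), (1976, 1), (1976, 2)]
print(break_dummy(dates, (1976, 1)))  # [0, 0, 1, 1]
```

As the reply notes, the resulting series goes in the "exogenous variables" box alongside the constant and the extra T-Y lags, so its coefficient never enters the causality Wald test.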
Bierens (1997) argues that the presence of structural breaks might imply broken deterministic trends, which is a particular case of a non-linear time trend. He suggests approximating broken time trends by non-linear trends. Based on this, I was wondering if adding dummy variables (to account for structural breaks) in the "Exogenous Variables" box in the VAR specification in EViews, and then carrying out the Granger-causality test, would be considered a non-linear test in Bierens' sense. Bierens, H. (1997). Testing the unit root with drift hypothesis against nonlinear trend stationarity, with an application to the U.S. price level and interest rate. Journal of Econometrics, 81, 29-64. Could you please advise. Many thanks, 80. Dear Professor Giles, I am from the Philippines and am currently in my undergraduate studies in economics. I am doing a thesis using time series data and I would like to ask you some questions about the Johansen cointegration test. It was not thoroughly discussed to us and I'm having a hard time conducting the test in EViews 4. We weren't advised to use the Engle cointegration test. Is it possible that you can give me some pointers as to how to conduct the test in EViews and how I may be able to interpret it? It will be very helpful for my study. I admire how concise and specific you are in explaining the methodology in econometrics. This will be deeply appreciated. Thank you. 81. Respected sir, can you provide me with the steps for the Toda & Yamamoto and ARDL cointegration approaches? 82. Imran: Thanks for your comment. I am in the process of preparing a couple of posts on ARDL models and using Pesaran's ARDL test for levels relationships. Keep watching! The T-Y procedure is related only to testing for Granger causality - not testing for cointegration. 83. Dear Prof. Say you want to test whether the nominal price of USD/EUR Granger-causes the nominal oil price (WTI). The time series are of course I(1), and they're cointegrated. How can you perform a Granger test on this data?
The T&Y method you've described here is a little bit too complicated for my work. Is there any way to test Granger causality with the usual F-statistics? If so, should you test in levels or dlogs? And how do I know which lags to include? Thank you for a great blog! Best Regards 1. Richard - thanks for the comment. If the data are integrated or cointegrated then there are no short-cuts. The usual F-test will fail, even asymptotically. That's precisely why you need to use something like the T-Y procedure. Choosing the lag length by minimizing the Schwarz criterion is simple, and is "consistent": it will choose the correct lag length with probability one if you have a large enough sample size. You can't use the t-statistics on the lag coefficients to select the lag length, for the same reason that the F-test fails. 2. Thank you very much for the quick answer! I'm currently using the program OxMetrics "PcGive". I don't know if you're familiar with it, but I have not found a way to test a VAR model for Schwarz/AIC. Is there any other way to determine the lags, then manually create the model in the program, and then use the chi-square table to test for the significance level? And one more question, if I may: Say, if you want to test Granger causality on two cointegrated time series which happen to NOT be in levels (i.e. if you want to test for Granger causality in two variables that are in %-changes (dlog)), isn't this possible then? Sorry if this was confusing! 3. Richard: To answer the last part of your question - no, this is NOT O.K. You still need to use the T-Y procedure (or its equivalent), and this requires that you fit the VAR in the levels - for the causality testing exercise. Otherwise the usual Wald (chi-square) test won't be asymptotically chi-square distributed. 84. Actually, I found a way to test for the SC! But I have to test every lag manually and then work my way down... Any tips on where to start? 10 lags, 12 lags? It's monthly data over a 12-year period.
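Since lag selection by minimizing the Schwarz criterion keeps coming up in this exchange, here is the mechanics of that search in a minimal sketch. The single-equation form of the criterion is used, and the SSR values and sample size are purely hypothetical numbers for illustration (real ones would come from fitting each candidate model):

```python
import math

def sic(ssr, T, k):
    """Schwarz (Bayes) information criterion for a single equation:
    ln(SSR/T) + k*ln(T)/T, where k is the number of estimated coefficients.
    (For a full VAR, ln(SSR/T) is replaced by the log-determinant of the
    residual covariance matrix.)"""
    return math.log(ssr / T) + k * math.log(T) / T

# hypothetical SSRs from fitting lag lengths 1..4 on T = 144 monthly obs.;
# k = p + 1 counts the p lag coefficients plus an intercept
ssr_by_lag = {1: 52.0, 2: 44.0, 3: 43.5, 4: 43.4}
T = 144
best = min(ssr_by_lag, key=lambda p: sic(ssr_by_lag[p], T, k=p + 1))
print(best)  # 2: the extra fit from lags 3 and 4 doesn't justify the penalty
```

The pattern matches the advice in the reply: search over a grid of lag lengths (zero to about 12 here, for monthly data) and keep the one with the smallest criterion value, rather than pruning lags by t-statistics.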
This will be my last question: When I've decided on the number of lags, there is no way to test Granger causality in PcGive like there is in EViews. Is there a manual way of finding the chi-square value? Thanks again for the great service and quick response! 1. Hi: In this case I'd search over zero to about 12 lags for SIC minimization. If the min. was at 12, though, I'd obviously then consider more lags and check their SICs. Re. the chi-square: just use the result that if you have the F-statistic with p and (n-k) degrees of freedom (say), then (pF) is asymptotically chi-square with p degrees of freedom. I hope this helps! 85. Dear Prof, 1. There are papers using the T-Y procedure that perform post-VAR(p+d) diagnostic tests such as adjusted R-squared, the B-G test, Ramsey's test, etc. I am curious about that. I am using STATA to test a VAR(p+d) of the T-Y type. But I don't do any post tests because the tests (varnorm, varstable, varlmar -- in STATA) suggest VAR(p) instead of VAR(p+d). I don't know how to perform those tests (B-G etc.) in STATA after the VAR unless I regress each equation in the VAR separately. What do you think about this? 2. For the Wald test, I use the 'test' command after VAR(p+d), e.g. for VAR(2+1): test [depvar]l1.indvar [depvar]l2.indvar=0. This test gives me the p-value. I hope this is correct. 3. Now, recent papers use generalized IRFs. Could you suggest any software to perform this? Or are there any tricks? Thank you. 86. ADIB: Thanks for your comment. If you're doing T-Y in EViews, all of the usual diagnostic tests are fully available, so that's easy. I'm not a STATA user, so I can't help you there, or with question 2, I'm afraid. And I'm afraid I don't have any tricks up my sleeve with respect to your last question. Sorry to be of little help! 87. Dear Mr. Giles, I read some articles about the T-Y procedure. Some of them used the SUR methodology. How can I know which methodology I should use? Best Regards Ahmet Gün. 88. Dear Prof.
Giles, I was wondering if it is possible to demonstrate with an example how to carry out a non-linear Granger-causality test between two variables. I do have some thoughts but I am not sure whether they're correct. Bierens (1997) argues that the presence of structural breaks might imply broken deterministic trends, which is a particular case of a non-linear time trend. He suggests approximating broken time trends by non-linear trends. Based on this, I was wondering if adding dummy variables (to account for structural breaks) in the "Exogenous Variables" box in the VAR specification in EViews, and then carrying out the Granger-causality test, would be considered a non-linear test in Bierens' sense. Bierens, H. (1997). Testing the unit root with drift hypothesis against nonlinear trend stationarity, with an application to the U.S. price level and interest rate. Journal of Econometrics, 81, 29-64. Could you please advise. 89. Thanks for the suggestion. I'll see what I can do! 90. Dear Prof. Dave, Thank you for the interesting information. I have a question and look for your kind help. I'm doing T-Y for my paper. Normally in empirical studies, we always transform the series to logged variables and take the first log difference to get the growth rate of the data. However, the series I have, which are trade balances, are negative for many years, so I cannot take logs. Is it possible that I could just enter the data in levels (without logs) and take their first difference to achieve stationary data, and enter them into my models? I know it's unusual in published articles. But is it possible to do that? Thank you very much. 1. Thanks for the comment. There is actually nothing unusual in using the original data rather than their logarithms. We would often do this with interest rates, for example. One reason for taking logs is often just to linearize the upward trend in the data. 2. Thank you very much for your prompt reply, Prof. Dave.
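The log-difference-as-growth-rate point in that question, and why it breaks down for trade balances, can be shown in a couple of lines. A minimal illustration with made-up numbers (not data from the post):

```python
import math

def dlog(series):
    """First difference of natural logs: approximately the per-period
    growth rate for small changes, since ln(x_t) - ln(x_{t-1}) = ln(x_t/x_{t-1})."""
    return [math.log(b) - math.log(a) for a, b in zip(series, series[1:])]

prices = [100.0, 102.0, 104.04]             # grows 2% each period
print([round(g, 4) for g in dlog(prices)])  # [0.0198, 0.0198], i.e. ~2%

# A trade balance changes sign, so ln() is undefined for some observations;
# the question's fallback is exactly right: difference the levels instead.
balances = [-3.2, 1.5, -0.7]
diffs = [b - a for a, b in zip(balances, balances[1:])]
print(diffs)
```

The first differences of the levels are perfectly usable for stationarity purposes; the only thing lost, as the follow-up comment below notes, is the elasticity interpretation of the coefficients.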
I think the main reason for taking logs of the series is that if they are non-stationary, we have to take the first difference of the logged series (the usual case in papers in my research field) to enter them in VAR models. If we do so, we can then interpret the coefficients as elasticities, and thus they are more economically meaningful. If we just keep the original series and take the first difference to get a stationary process, and we enter the first-differenced data into the VAR models, then the coefficients are meaningless. And to check whether the results are economically plausible, it would be necessary not only to check which causality direction is statistically significant but also to check whether the coefficients are reasonable. Kindly advise me if I'm right. Thank you. 3. Actually I'm doing the T-Y test for annual relationships among the variables (I used monthly data and would like to see how the relationship changes year over year). Thus I divide the whole sample (13 years) into 13 sub-samples corresponding to the 13 years. Please kindly take a look at these short questions of mine: - If the series is I(0) (or I(1)) for the whole sample, does it imply the series is also I(0) (or I(1)) for any small sample (i.e. any year)? - To find the maximum order of integration (step 2), can I use the result for the whole sample to apply to each sub-sample? - Basically, for all the steps in the T-Y procedure that you described, can I use the results for the whole sample in the sub-sample testing procedure, or do I have to redo them for each sub-sample? Could you kindly advise me please? I'm sorry for disturbing you too much, but your reply is greatly appreciated. 4. Anonymous - you really need to do the analysis for each sub-sample (assuming you have enough observations, of course). Regarding logarithms vs. levels of the data: it's not really a matter of convenience - e.g., to get elasticities easily. It's a matter of whether the data are additively integrated, or multiplicatively integrated. I think I should prepare a separate post on this! 5.
Thank you for all your kind help :) 91. May I ask you an additional question: is the log transformation of a non-stationary series also non-stationary, and vice versa? Thank you! 1. Good question! You have to be careful. The real issues are: (i) whether the data are additively integrated or multiplicatively integrated; and (ii) how robust the test you use is to mis-specification between these two forms. The following paper gives a good overview and provides references to the earlier literature: 2. Thank you for providing me with this useful reference, Prof. Dave. :) 92. Dear Prof. Giles, First, thanks for this very clear and interesting blog: it's very helpful, and pretty scarce in the econometrics field. Regarding Granger causality tests associated with cointegration models, some authors analyse short-run as well as long-run causality between the set of endogenous variables. I wonder how they can perform both tests using EViews? I guess that long-run causality corresponds to a Granger test performed on the VAR model and the short-run test is the same using the VECM part on differenced series, but I'm not sure. Thank you very much in advance if you could explain this point. 93. Dear Prof. Giles, Thank you for your excellent blog. I'm employing the T-Y procedure for my paper and I use data series which are likely to have a structural break. You said in your post, under the practical example, that: "It looks as if there may be a structural break in the form of a shift in the levels of the series in 1975. We know that this will affect our unit root and cointegration tests, and it will also have implications for the specification of our VAR model and causality tests. This can all be handled, of course,..." Could you please illustrate in a bit more detail? I know how to handle unit roots (I use Zivot-Andrews) and cointegration tests (I use Gregory and Hansen, 1996) (actually you already guided us on how to deal with this issue in another post).
But how about the VAR modification and the T-Y procedure? Could you please elaborate a bit on what to do (esp. with T-Y) when the series has breaks, or please suggest some references. Thank you very much, Prof. Dave! 94. Dear Prof. Giles, Thank you for your generous teachings. On an off-topic note, may I know the ways or steps for conducting an estimation of the random walk hypothesis? And I am curious about the interpretation too. Once again, thank you and may God bless you. 95. Hi, Prof. Giles, Thanks for your interesting blog. Your explanations really give me more understanding and are very useful. Meanwhile, I have a few questions I would like to ask you, Prof.: 1. You mentioned that we must not difference the data in the VAR when we want to do the TYDL procedure. How about for a VECM? Assuming all my variables are I(1), can I add the differenced variables as the endogenous variables, such as dlnint dlngdp, to find the 'lag length criteria' in the VAR model before doing the VECM? 2. If there is no cointegration in the VECM and we want to find a short-run relationship in the VAR model, and assuming all the variables are I(1), do we need to set up the VAR model using first-differenced variables (dlnint dlngdp) or straight away use the level form of the data (int gdp)? 3. I found that when I generate my results, some of the variables show a conflict between the KPSS, PP and ADF tests. For example, one variable's results for the PP and ADF tests show stationarity at I(1), while the KPSS test shows stationarity at I(0). Is my result acceptable? Which result should I take? Thanks. Your reply is highly appreciated. Have a nice day!!! 96. Dear Prof Giles, what length of series using weekly data would you suggest for testing for cointegration in an ARMA model? Thank you. 1. I'm afraid there's no simple answer to this. What matters for the cointegration testing is the time-span that the data cover (number of years), rather than the number of observations. 97.
Dear Prof Giles, I am studying the relationship between credit default swap (CDS) spreads and credit ratings. I want to check if ratings have an impact on CDS spreads over a long period of time. The problem is that there may be other things affecting CDS spreads besides ratings. Also, there is a possibility that because of a certain pattern of CDS spreads they may be rated high or low. In such a scenario, what could be the best way to analyze this? I am thinking of a Granger test. Is this appropriate? What do I need to keep in mind while doing such an analysis on time series data? How do I make sure that I get robust results? Thanks in advance. 98. There's not much more I can really add to the detailed discussion in the post. If you are working with quarterly or monthly data, be aware of the possibility of seasonal unit roots and/or cointegration. Plot your data and look carefully for any signs of structural breaks. If you have sufficiently long time-series, then you might test the "robustness" of your results by performing the causality tests using different sub-samples. 1. Dear Prof. Giles, Firstly, I am very impressed with your blog, particularly the Toda and Yamamoto method of G-causality. The blog has many valuable academic materials which I never knew before, even though I am an ... Secondly, as you mentioned seasonal unit roots, I have a question of interpretation. I would like to use G-causality for the jobs-follow-people or people-follow-jobs phenomenon with quarterly data. Unfortunately, there is a (prior) structural break. As a result, I run HEGY and use critical values from Smith and Otero (1997), which allow for an exogenous change. The tests at the annual and biannual (or seasonal) frequencies were significant, while the zero (or non-seasonal) frequency was not. What is the meaning of zero frequency? Is it the same root as tested with the ADF? I understand the result to mean that my data do not have a seasonal unit root and I can use the T-Y method for finding the jobs-people movement.
Is my understanding correct? Thank you very much, Narongsak K. 99. Dear Prof., I hope you will still post on how to use the Autoregressive Distributed Lag (ARDL) bounds testing approach for investigating the existence of a cointegrating relationship among variables. dele 100. Yes - it's still on my "to do" list. :-) 101. Dear Prof, I'm a beginner in econometrics. I'm interested to know what theoretical reference(s) in econometrics support the fact that the Wald test statistic does not follow its usual asymptotic chi-square distribution under the null. 1. This problem arises if the data are non-stationary, whether they are cointegrated or not. See the references in the Toda and Yamamoto paper. 2. Dear Sir, and how about Granger causality tests with panel data? Are there any standard techniques, as in time series? Herimanitra R 3. You could check out Kònya, L. 2006. Exports and growth: Granger causality analysis on OECD countries with a panel data approach, Economic Modelling, 23, 978-992. 4. Also, Jochen Hartwig had a paper in the March 2010 issue of the "J. of Macroeconomics" that may help you: 102. Dear Prof, I am having problems with some data I'm working with. I am trying to construct a VECM. The issue is that one of the variables is stationary when converted to natural logarithms while the others are nonstationary. Is it possible to go on constructing a VECM with the data, and what should be my next course of action? 103. Hi Prof Giles, thank you for such an informative and generous blog. Although your steps are very detailed, I can't help but wonder, in regard to the Johansen test, is there any formal approach to the specification of the deterministic components, i.e. a test/steps to determine which model (linear unrestricted vs. linear restricted)? Again, thank you :) 1. Hi - you might take a look at the very helpful paper by Philip Hans Franses in "Applied Economics", 2001, vol. 33(5): 104. Thanks for the clear explanations.
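Earlier in the thread it was noted that an F-statistic with p and (n-k) degrees of freedom converts to the asymptotically equivalent Wald form as pF ~ chi-square(p). A quick numerical illustration (the F value of 2.5 on 6 restrictions is made up for the example; the chi-square tail formula below is the standard closed form for even degrees of freedom):

```python
import math

def f_to_chi2(f_stat, p):
    """If F has (p, n-k) degrees of freedom, then p*F is asymptotically
    chi-square with p degrees of freedom."""
    return p * f_stat

def chi2_sf_even(x, df):
    """P(X > x) for a chi-square with EVEN df, via the closed form
    exp(-x/2) * sum_{i=0}^{df/2 - 1} (x/2)^i / i! ."""
    assert df % 2 == 0 and df > 0
    return math.exp(-x / 2) * sum((x / 2) ** i / math.factorial(i)
                                  for i in range(df // 2))

w = f_to_chi2(2.5, 6)                 # Wald statistic: 6 * 2.5 = 15.0
print(round(chi2_sf_even(w, 6), 4))   # 0.0203: reject the restrictions at 5%
```

Handy when the software reports only an F value: convert, then compare against the chi-square table, which is exactly the "manual way of finding the chi-square value" asked about above.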
I am trying to use the T-Y procedure to study interdependencies between the Russian stock index and macroeconomic factors using monthly time series. I discovered that there is serial correlation in the VAR model's residuals at the seasonal lag, i.e., 12. Even after I use p=12, there still remains serial correlation in the residuals. My first question is how to avoid the problem of serial correlation. Also, I found a problem of multicollinearity. Specifically, GDP and the oil price are highly correlated, and hence the inclusion of both gdp and oil_price distorts the coefficients in the VAR equations. Should I exclude one of these variables? Another surprise is that in the end I obtained results that contradict economic sense: the Wald tests show that the Russian stock index Granger causes oil prices (actually, it is reasonable to assume that oil prices Granger cause the Russian stock index). At the same time it shows that oil prices Granger cause Russian GDP, which makes economic sense. The third question is how to interpret economically nonsensical results?

105. Respected Prof. Giles, I am using 54 observations to test for a unit root with one structural break by the Lee-Strazicich method in RATS. For the general-to-specific procedure, what maximum lag should I consider? How can the maximum lag be determined?

1. Bipradas: There's no simple answer to this. Are your data annual, quarterly, monthly? If they are quarterly, then you probably need a longer max. lag than if they are annual, etc.

106. Professor Giles, Thank you for the great post, it was very instructive. I was just wondering if you need to include a normality test when performing the procedure. Thanks again.

1. Thanks for the comment. No - you don't need normality if all you are doing is testing for non-causality. The whole procedure I've outlined has only asymptotic justification. If you are also going to be testing for cointegration, using Johansen's methodology, then normality becomes more of an issue.

107.
Professor Giles, I am trying to find the direction of causality between bilateral aid and bilateral trade for one country. It is panel data, as it runs from 1987 to 2010 (annual) and each year has around 180 aid I was wondering how to run a Granger causality test. I am having trouble finding the appropriate lag length, as depending on the lag length, the result changes. Thank you for your instructional posts!

1. I'd choose the lag length by minimizing the Schwarz (Bayes) Information Criterion. This has the advantage of being a "consistent" selection criterion (c.f. AIC).

108. So many thanks for the explanation; I do my research using the Toda-Yamamoto causality test also. :)

109. Dear Prof Giles! I have to mention the hypotheses of the GC-test in a presentation. Is it enough to state:
H0: X does not Granger-cause Y (and vice versa, H0: Y does not Granger-cause X)
H1: Not H0
or is there a more formal way to present the hypotheses? Thank you very much!

1. John:
H0: X does not Granger-cause Y; H1: Not H0
H0': Y does not Granger-cause X; H1': Not H0'

2. Thank you very much! Your straightforward help is highly appreciated!

110. Dear prof Giles... regarding the output from EViews under Step 6 (Johansen's trace test and max. eigenvalue test both indicate the presence of cointegration between the 2 series): I do not understand why the output reads "Lags interval (for first differences): 1 to 5", because, as you mention in Step 5, the max lag length is p=6. Is it because we need to reduce the lag length? I have run the same steps as the T-Y steps. For my project, I get p=7 in order to remove serial correlation. When I run "1 to 7", I get a different number of cointegrating relationships, for example, trace test: 2 cointegrating; max eigenvalue test: 1 cointegrating. But when I run "1 to 6", I get the same number of cointegrating relationships for both the trace and max tests. Can you explain which one I should run for my project, and if I need to run "1 to 6", what reasons need to be addressed?
thank you for your feedback

111. Thanks for the question. As you note, the lag interval is "for first-differences", so this will always be one less than the lag length selected originally. This arises because in the Johansen framework, a VAR with a first-differenced dependent variable is one of the VARs that is estimated ("behind the scenes").

1. thank you very much for your help and feedback regarding my problem :)

112. Dear prof., I hope you will still post on how to use the ARDL bounds test to test for cointegration. I have tried all I could to use EViews 5 to estimate the F test. Thanks.

113. I'm trying to get to it!

114. Dear Sir, your blog is quite interesting. However, I have one query: in the case of a VECM, shall we conduct the preliminary tests (heteroskedasticity, autocorrelation, and normality tests) on the VECM or the VAR model?

115. Sahar: testing for serial independence and normality is important - homoskedasticity is less of an issue, in general.

116. Dear Prof. Giles, thank you very much for this fantastic instruction! I am investigating the causality between media attention (A) and terrorism (T) for the period 1970-2010. I have set up a VAR model, for which the optimal lag is calculated to be between 2 (SC) and 14 (AIC). I have decided to go for the SC criterion. The Wald test gives me the following results for the linear model with 1 additional lag:
T = a1*T.l1 + a2*T.l2 + a3*T.l3 + a4*A + a5*A.l1 + a6*A.l2 + a7*A.l3
a4 is not different from 0 (p < 2.22e-16)
a5 is not different from 0 (p = 0.72826)
a6 is not different from 0 (p = 0.00080012)
So I have TY-causality for a4 and a6, but not a5. How should I evaluate this result in terms of overall TY-causality? Best regards

117. Christoph: You need to construct the joint test of a5=a6=0. If you can't reject this null, then there is no Granger causality from A to T. I agree with using SC over AIC - the former is a "consistent" selection procedure; the latter isn't.
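For readers wondering how to construct such a joint test in practice, here is a hedged sketch in Python with statsmodels. The variable names (T_ for the "caused" series, A_ for the candidate cause, with lags A1 and A2 playing the role of a5 and a6 above) and the simulated data are invented for the example; in the T-Y setting, the point is to test only the first m lag coefficients jointly and leave the extra augmentation lag unrestricted.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
A = rng.normal(size=n)                 # candidate "causing" series (invented)
T = np.zeros(n)
for t in range(1, n):                  # A really does drive T at lag 1
    T[t] = 0.4 * T[t - 1] + 0.3 * A[t - 1] + rng.normal()

df = pd.DataFrame({"T_": T, "A_": A})
for k in (1, 2, 3):
    df[f"T{k}"] = df["T_"].shift(k)
    df[f"A{k}"] = df["A_"].shift(k)
df = df.dropna()

# One equation of the augmented model: m = 2 lags of interest plus one
# extra T-Y lag (lag 3), which stays unrestricted in the test
res = smf.ols("T_ ~ T1 + T2 + T3 + A1 + A2 + A3", data=df).fit()

# Joint Wald test of the first m lag coefficients (the analogue of a5 = a6 = 0)
wald = res.wald_test("A1 = 0, A2 = 0", use_f=False)
pval = float(np.squeeze(np.asarray(wald.pvalue)))
print(f"Joint Wald p-value for A1 = A2 = 0: {pval:.4g}")
```

Because the simulated data have a genuine lag-1 effect, the joint null should be rejected here; with no effect, each restriction would be tested jointly rather than one coefficient at a time.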
BTW - interpreting the p-values:
a4: Reject H0: a4=0
a5: Cannot reject H0: a5=0
a6: Reject H0: a6=0

118. Dear Professor Giles, two more questions (just to be sure I understand this correctly):
- If I have a (trend-)stationary time-series, can I use the standard Granger-causality test?
- Testing for (non-)stationarity, I get conflicting results. The ADF and PP tests say that the series is stationary; the KPSS test says it isn't. Which one should I believe?
1000 thanks!

1. Christoph: First question - Yes. 2nd question - this is a common problem, especially with moderate-sized samples. Make sure that you have allowed properly for any structural breaks - this can add to the problem of conflicting results. Bottom line - ask yourself, "which is the more costly mistake to make? Concluding that the series is stationary, when really it is non-stationary? Or concluding that it is non-stationary, when really it is stationary?" Usually, the first type of error is more costly - you'll end up with a meaningless model. In the other case, you may end up being conservative and unnecessarily "difference" data that are stationary. This over-differencing results in a series that is still stationary (although not I(0)) - that's not such a bad thing, in general.

119. Dear Professor Giles, this is hopefully my last question: I'm testing whether the different segments of my VAR model are well specified. If the lag order is high enough, serial correlation can be eliminated. However, the JB test shows that the residuals are not normally distributed. In addition, the Harrison-McCabe test shows that heteroskedasticity is present. Is this a serious issue, or is mentioning it enough? Might bootstrapping solve the issue? Are there any other options? Thanks again,

1. Christoph: Sorry to be slow in responding to this one. The normality of the errors isn't needed when testing for Granger non-causality. The heteroskedasticity is worth mentioning, but is not really a serious problem.
The usual non-standard asymptotic results associated with integrated and cointegrated data hold when the generating process is mildly heteroskedastic. Yes, bootstrapping is always a good option - it's likely to be a bit tedious in the context you are talking about, though.

120. Hello Professor, I'm an Economics student from the Philippines and I'm having a hard time determining what to do, especially using EViews. :( Our study is about the tourism-led growth hypothesis, and whether it is applicable to the Philippines. How many observations should there be for Granger causality?

121. Dear prof, can you please explain how to carry out a Granger causality test in panel data using EViews 6.0. Thank you.

1. I'll try to get to this in a later post.

122. Dear Professor Giles, first of all, thanks a lot for your informative blog. Here I need a little clarification: if both series (say X and Y) are I(0), then will performing Granger causality with the usual process and following the T-Y procedure give different results? If yes, then please clarify how to do the usual Granger causality test in this case, as I have got the idea of how to perform T-Y from your blog. Thanks and Regards

1. Nain: Thanks for your comment. If each series is I(0), then you should just estimate a VAR in the levels (or log-levels), not the differences, of the data. You would choose the maximum lag length in the usual way (AIC, SIC, etc.). Then test for Granger non-causality in the usual way. Because the data are stationary, the Wald test statistic will have its usual asymptotic chi-square distribution. You shouldn't add any "extra" lags, as would be the case with the T-Y method.

123. Hi DG, Just found your blog and read much of it already - fantastic work! I am quite a novice at estimating VARs however, and, while working through my data with the help of your notes, I have a brief query. The initial lag length selection choice (say [m], not [m+p]), in my data as well as in your example, seems extremely subjective.
For example, different ICs give different recommendations, and when these different choices are chosen, the residuals are still serially correlated, heteroskedastic and non-normal. To rectify this problem, the lag length of my bivariate and trivariate VARs has to be increased up to ~20 periods to get 'well behaved' residuals. If the choice of the initial lag length [m] (as in your example; you jump from 2-->6?) is, as mentioned, so subjective, the addition of an additional augmented lag as per TY [p] seems almost trivial due to [m] being so arbitrary, no? Thanks again for all your great work on this blog by the way! Apologies if my naivety in your profession offends!

124. Dave, in addition to the previous question which I submitted... why do you not consider the normality of the error terms in the unaugmented VAR? Isn't this condition required to ensure that the distribution of the chi-square statistic in the augmented VAR reaches its asymptotic values? Thanks again for your time,

1. C - thanks for the comment. The "ASYMPTOTIC" chi-square result does not require normality.

125. Dear Professor Giles, Thanks for sharing this information and for the perfect instructions! Thank God for a generous Professor like you. I've conducted the ARDL bounds testing for my current study. Now, I'm thinking of conducting a TY-causality test too. Is it appropriate to compare the results from ARDL and TY in an article? I'm using multivariate time series with a small sample size.

1. Zai - thanks for the kind comment. Keep in mind that the ARDL test is a test for cointegration, while the TY test is a test for Granger non-causality. You can do both with the same data-set, but you are testing for different things. You'll also have to be very careful if you have a small sample size, as the results associated with both tests are valid only asymptotically.

126. Dear Professor Giles, Your instructions are very, very useful.
I have two time series of 30 observations (quarterly data, I(1)), and I want to explore causality. Is it correct to use the Toda-Yamamoto procedure in that case? If not, what should the minimum sample size be? Can you propose another method for testing causality? Thanks and Regards

1. Hi Milka, I have a similar issue, too. I wonder if you have made any progress with the T-Y procedure under a small sample. I'd appreciate it if you could share.

127. Milka: Your sample is very small, but I would still use the T-Y procedure. You could bootstrap the Wald test to allow for the small sample size.

1. Dear Professor Giles, I have a question along the same lines: if I have a small sample, how does this impact the unit root tests? I'm using KPSS and ADF. Thanks in advance.

2. First, if you have a small sample, for the ADF test you should be using the usual MacKinnon response-surface critical values. It may appear that his crit. values are only for cointegration testing, but for his N=2 case, they are for the ADF test for a unit root. If you're using EViews, the exact finite-sample critical values are automatically used in computing the p-values. In the case of the KPSS test, several authors have published finite-sample critical values, including Hornok and Larsson (Econometrics Journal, 2000). In EViews only the asymptotic critical values are used, which is a pity. You'll probably also find the following paper helpful:

3. Thanks so much for your answer, professor. I have another question: I'm doing a bootstrap for the Wald test, but I don't have pre-sample data. Do you know any method to compute it? Right now I'm using 0 for all pre-sample values. Thanks.

128. Dear Professor Giles, Thank you for your response. Can I use the first difference of real GDP as a proxy for the growth rate of real GDP in the Toda-Yamamoto procedure? Best regards,

129. Milka - the first difference of the LOGARITHM of real GDP will give you a measure of the GDP growth rate.

130. Dear sir, Q1.
If my time series are cointegrated but have Granger causality in both directions - does this signify that the series have some long-run relation but not a short-run one, due to the small sample size?

Q2. If my time series are integrated of different orders, can I use the Granger causality test, or do I need an unrestricted VECM to find the long-run and short-run causality between the

131. Dear Sir, first of all, thanks a lot for your valuable blog, which is of great interest and help. I have a question concerning the T-Y test: I have a VAR consisting of 5 series having (very) similar trends. They are all not trend-stationary but I(1). A test for cointegration arrives at the result that there is 1 cointegrating vector (with a restricted linear trend in this vector). This sounds plausible. All variables, however, appear to be weakly exogenous. That means – as far as I know – that the long-run relationship does not provide any information in the EC model. How is this result to be interpreted? Next, I did the T-Y test to look for some Granger causality between the series. I found some significant relations, which is, I think, consistent with what you wrote in point 13 of your original contribution. But my problem is that I want to show that there is indeed a long-run relationship (common trend) between the 5 series but no "contagion" in the narrow sense. Is it possible to include a time trend in the VAR to "account" for the common trend in the series? In this case, all significant Granger causalities disappear when using the T-Y procedure. May I conclude from that result that there is no short-run influence between the series? Thanks a lot for your helpful comments

132. Dear professor, I am doing a dissertation entitled "The relationship between economic growth and the current account balance, 1990-2010 (Zimbabwean case)", with annual data.
I have a problem whereby, in EViews 3.1, the variables are both stationary, i.e., I(0), according to the unit root tests, but I have been told to do a cointegration test using the Johansen test... and I am not able to understand the results, as most previous findings of other scholars have used the Johansen test when they have at least one variable that is I(1) or above. So what should I do?

1. If all of your series are I(0) then there is no possibility for cointegration to be even defined, let alone exist.

133. Dear Professor Giles, I have got a question regarding the Johansen test for cointegration in EViews. If the Johansen test is performed using five variables, according to the output obtained, the following number of cointegration relationships is possible: none, at most 1, at most 2, at most 3, at most 4. If there is an asterisk (*) behind any of those options (none*, at most 1*, at most 2*, at most 3*, at most 4*), there is the following text below this listing: "Trace/Max-eigenvalue test indicates 5 cointegrating eqn(s) at the 0.05 level." Now my question is: in the case of five variables, can there be 'at most' 4 or 5 cointegration equations? The reason behind this question is the following: if I try to estimate a VEC model with 5 research variables in EViews and enter 'cointegrating rank 5' in the cointegration section, I get the following error message: "Invalid specification of number of cointegrating equations." Maybe you can help me with that issue. Thank you very much in advance! Kind regards

1. Jan - regardless of the context, the maximum number of cointegrating relationships is always one less than the number of variables under consideration. If there are just two I(1) variables, you either have cointegration or you don't. In this case the maximum number of cointegrating relationships is just one. Also, in this case of 2 variables, if a cointegrating relationship exists, it is unique. (This is NOT the case when there are 3 or more I(1) variables.)

2.
Dear Professor Giles, Thank you very much for your quick reply! Unfortunately, I am still a little bit confused when it comes to the EViews Johansen output. For a case with five I(1) variables, the maximum number of cointegrating relationships would then be four. Is the interpretation of the Johansen output below correct? If so, how should I interpret the last case? If not, where's my mistake?

At most 1 / At most 2 / At most 3 / At most 4 => no cointegration relationship
At most 1 / At most 2 / At most 3 / At most 4 => 1 cointegration relationship (= cointegrating rank 1)
At most 1* / At most 2 / At most 3 / At most 4 => 2 cointegration relationships (= cointegrating rank 2)
At most 1* / At most 2* / At most 3 / At most 4 => 3 cointegration relationships (= cointegrating rank 3)
At most 1* / At most 2* / At most 3* / At most 4 => 4 cointegration relationships (= cointegrating rank 4)
At most 1* / At most 2* / At most 3* / At most 4*

3. I have no idea where this "output" came from, of course. The point still remains: the maximum number of cointegrating relationships you can have is one less than the number of integrated variables.

4. This is not an original output from any econometric software. I've tried to set up an example, but that might have been confusing. The output below is from EViews and has been produced for five variables which are all I(1):

Series: OIL QMT VALUE EXRA RGDP
Lags interval (in first differences): 1 to 15
Unrestricted Cointegration Rank Test (Trace)

Hypothesized No. of CE(s) | Eigenvalue | Trace Statistic | 0.05 Critical Value | Prob.**
None *      | 0.653482 | 283.0715 | 76.97277 | 0.0000
At most 1 * | 0.523536 | 173.9100 | 54.07904 | 0.0000
At most 2 * | 0.374562 | 97.54966 | 35.19275 | 0.0000
At most 3 * | 0.271557 | 49.21152 | 20.26184 | 0.0000
At most 4 * | 0.148653 | 16.57635 | 9.164546 | 0.0017

Trace test indicates 5 cointegrating eqn(s) at the 0.05 level
* denotes rejection of the hypothesis at the 0.05 level
**MacKinnon-Haug-Michelis (1999) p-values

My question with regard to the output above is as follows: if there can only be a maximum of 4 cointegrating relations, why does the output include the following remark: "Trace test indicates 5 cointegrating eqn(s) at the 0.05 level"? In other words, why does the test indicate 5 cointegrating equations if there can only be a maximum of 4?

5. This makes no sense, I'm afraid. Why not email me your EViews workfile (dgiles@uvic.ca) and I'll take a look. Incidentally, your lags interval of 1 to 15 seems excessive. I'll look forward to hearing from you.

6. If you check p.367 of vol. 2 of the User's Guide for EViews 6, you find the following. The case of k cointegrating relations (k=5 in your case) is taken to correspond to the situation where all of the series are in fact stationary - none of them have unit roots. So, you have a conflict between the results of your unit root testing and those of your cointegration testing. There could be several reasons for this - e.g., structural breaks in the data; a very short sample size, etc.

134. Dear Professor Giles, Thank you for your answers earlier. I have another question. In the case of the Toda-Yamamoto procedure, when I have time series that are I(0) and I(1), how can I know whether the effects of earlier values of series Y have a positive or negative impact on the current value of Y? Best regards,

135. Milka - I'm not entirely sure what you are asking here. Can you please clarify?

136.
Dear Professor Giles, How can I interpret these results?
Bank Deposit does not Granger cause GDP: 5.855410, p = 0.0535
GDP does not Granger cause Bank Deposit: 5.723306, p = 0.0572
Thanks again for your time

137. Dear Professor Giles, I just had a referee report concerning a paper submission. I followed your methodology for Granger causality. I am focused on the relationship between 2 variables for the US and EU and have a number of other variables as controls. I presented the results of the procedure you described concerning Granger-causality on pairwise tests. However, the referee states that "it is known that it is not optimal to test for causality in a bivariate model, particularly if there is an auxiliary variable that influences the two variables in the bivariate system". I am a bit surprised by this comment, as I am not testing for causality per se, but actually checking whether Y has information with respect to X. Also, it is a purely forecasting paper, with no structural model behind it. Following the referee's reasoning, there is no reason why I should exclude a priori any variables from the testing. Should I end up having to estimate a VAR with pairwise testing and all other variables as exogenous variables? My data set consists of 11 variables for the EU and 11 variables for the US. Would anything change in the testing procedure above? Thanks a lot for any feedback you may provide. And congratulations on your service to the community with this blog. I wish more people would follow your example :)

1. Sorry to hear about your rejection. :-( First of all, if there are additional variables that might cause X or Y, or if X might cause Y indirectly, through some other variable Z, then this really should be taken into account. One thing I'm not clear about, from your description: are you interested in testing for causality between one variable in the US and one variable in the EU, with lots of control variables in the picture?
Or are you interested in testing, pairwise (11 times), between a US variable and its counterpart in the EU? Depending on which you're interested in, this will affect how you should set up the model and proceed from there. For example, if it's the first of these two cases, then you'd presumably estimate a 2-equation VAR, for the US and EU variables of interest, with all of the other covariates added into each of the equations. These additional exogenous variables might enter with or without lags. Then, you'd undertake the usual T-Y testing procedure. Perhaps you could elaborate a little on my questions above, or email me directly at dgiles@uvic.ca

138. Prof. Giles, Following your post dated Oct. 18, 2012: if X might cause Y through another variable Z, does that mean that I have to do a Granger test from X to Z first, and then another Granger test from Z to Y? Or just put Z as an exogenous variable in the VAR model?

1. It would probably be better to estimate a three-equation VAR, and then do your causality testing within that framework.

2. Thanks Prof. Giles. I think a 3-equation VAR is OK, because I do not want to interpret the link between X and Y as partial causality.

139. Dear Professor Giles, I think I have managed the TY-method fairly well now. What if I find different structural breaks (Bai-Perron) in the SUR model for each equation? Is it possible to estimate each equation separately for each resulting segment? Thank you very much again,

1. Christoph - yes, if you have enough degrees of freedom to do that. Otherwise you could consider a careful use of dummy variables for the breaks.

140. Thank you! That's exactly what I had in mind. I might actually do both. I think BP is superior because it can handle multiple endogenously determined breaks.

141.
Dear professor, I'm unsure if you've already answered this (it's hard to go through so many comments), but I was wondering if you have to check the VAR residuals, and that they fall inside the confidence interval when you graph them, to assess whether the model is properly specified. Btw, great blog, keep up the good work,

1. Alex - yes, a thorough examination of the residuals, testing them for serial correlation, outliers, etc., is very important here, as always.

142. Dear Professor: Would you mind telling me more about how to include a multiplicative dummy variable in the Granger causality test? I'm not quite sure how to allocate it to the dependent or independent side, since I'm estimating a VAR.

143. Thanks for the great explanation. Kushneel (Fiji)

144. Dear Dr. Giles, I think a good next post on this topic would be "instantaneous causality" within the T-Y framework. I see that it hasn't been covered yet on your blog and receives only a mystifying treatment in "New Time Series Analysis".

145. Dear Prof, I want to examine the long-run relationship between 14 stock market indices through JJ cointegration. However, I found that all the series are stationary (I(0)). I have run a VAR system. So how can I proceed then?

1. If all of the series are I(0), they can't be cointegrated.

146. Dear Prof Dave, I am investigating the linkages among financial deepening, trade openness and economic growth. Can you help me out with the STATA commands for the following: 1. Trivariate panel Granger causality test; 2. Choi (2006) unit root test; 3. Fisher-Johansen panel cointegration test (Maddala and Wu, 1999). I should be very grateful for your help.

147. This comment has been removed by a blog administrator.

148. Bernard - I'm not really a STATA user. However, I've posted your request today - perhaps you'll get some feedback from it.

149. Dear Prof. Giles, When analyzing G-causality where one of the two variables is a price level, does it matter if the price is in nominal or real terms?
Which is preferred in the literature? Thanks so much for educating practitioners and students around the world!

1. Thanks for the comment. It really doesn't matter from the viewpoint of the Granger causality testing. The important thing is that you test with the variable in the form that is consistent with the economic hypothesis that you are interested in.

150. Prof Giles, I have a few queries. 1. How do we interpret the Granger non-causality / block exogeneity test with three endogenous variables? 2. While undertaking the serial correlation test, how should we use the p-values along with the LM statistic in the example that you have provided above? 3. What is the use of the inverted AR roots, and how are they interpreted?

151. Dear Prof. Giles, Is there a way to determine the magnitude of causality and its direction, such as negative or positive, between two variables? Thank you!
ESP Biography

DAVID GRABOVSKY, Columbia senior studying math and physics

Major: Physics, Mathematics
College/Employer: Columbia University
Year of Graduation: 2019
Brief Biographical Sketch: Not Available.

Past Classes
(Clicking a class title will bring you to the course's section of the corresponding course catalog)

M772: All of Linear Algebra in Splash Fall 2018 (Oct. 28, 2018)
This class will be my attempt to do the entirety of an advanced undergraduate course in linear algebra in less than two hours. We will develop the theory of finite-dimensional vector spaces, starting with vectors and linear combinations and moving on to linear transformations, isomorphisms, and the change of basis. Then we'll introduce eigenvectors and eigenvalues and discuss inner products as an excuse to define linear functionals and the dual space. Finally, we will finish off with the crowning glory of linear algebra, the spectral theorem. The goal of the class is to show you a wild ride through linear algebra, and to "give away the trade secrets" of a beautiful subject.

M773: False Theorems and Fake Proofs in Splash Fall 2018 (Oct. 28, 2018)
In this introductory "math" class, we will "prove" some entirely false claims in surprisingly convincing ways. For example, we'll show that 1 is the largest natural number, but that all natural numbers are equal to zero. Come find out why the Pythagorean theorem is a lot simpler than you thought (spoiler: $$c = a+b$$), why all functions are equal to zero, and why $$1 = 2$$. (I'll offer as many proofs of this "fact" as I have time for!) The emphasis will be on making math ridiculous and funny, as well as on discovering what really makes standard methods of proof tick. Come ready to

S777: Poetry for Physicists in Splash Fall 2018 (Oct. 28, 2018)
In this class, I will wax poetic on the beauty of physics by writing down an inordinate number of equations.
Forgoing technical derivations in favor of physical argument, I will try to describe the glorious apparatus of modern theoretical physics, from classical mechanics to the quantum theory of fields. We will start by scrapping Newton's laws (because they're boring), and we'll spend most of the class discussing two beautiful reformulations of classical mechanics. Among our poetic vistas will be Hamilton's least-action principle, the Euler-Lagrange equations of motion, Noether's theorem relating symmetries to conservation laws, canonical conjugate momenta, Hamilton's equations of motion, Poisson brackets, and Liouville's theorem. Time permitting, we may also discuss classical field theory or symplectic geometry. At the end of class, I will try to convince you that classical and quantum physics are really the same, and that commutators, Heisenberg's uncertainty principle, and path integrals are all really part of the same poetic physical structure.

M649: All of Linear Algebra in Splash Spring 18 (Mar. 31, 2018)
This class will be my attempt to do the entirety of an advanced undergraduate course in linear algebra in less than two hours. We will develop the theory of finite-dimensional vector spaces, starting with vectors, linear combinations, and bases. We'll talk about the kernel and image of linear transformations before moving on to isomorphisms and changes of basis. We will introduce eigenvectors, discuss inner products, linear functionals, and the dual space, and finish off with self-adjoint operators and the spectral theorem. The goal of the class is to show you a wild ride through linear algebra, and to "give away the trade secrets" of a beautiful subject.

M650: False Theorems and Fake Proofs in Splash Spring 18 (Mar. 31, 2018)
In this introductory "math" class, we will "prove" some entirely false claims in surprisingly convincing ways. For example, we'll show that 1 is the largest natural number, but that all natural numbers are equal to zero.
Come find out why the Pythagorean theorem is a lot simpler than you thought (spoiler: $$c = a+b$$), why all functions are equal to zero, and why $$1 = 2$$. (I'll offer as many proofs of this "fact" as I have time for!) The emphasis will be on making math ridiculous and funny, as well as on discovering what really makes standard methods of proof tick. Come ready to

S652: Poetry for Physicists in Splash Spring 18 (Mar. 31, 2018)
In this class, I will wax poetic on the beauty of physics by writing down an inordinate number of equations. Forgoing technical derivations in favor of physical argument, I will try to describe the glorious apparatus of modern theoretical physics, from classical mechanics to the quantum theory of fields. We will start by scrapping Newton's laws and postulating a principle of stationary action to govern the dynamics of classical particles. After writing down the Euler-Lagrange equations of motion, we will discuss the truly poetic theorem of Noether relating symmetries to conservation laws. We will then rewrite mechanics in the Hamiltonian formalism and pause to discuss phase space. Next, we will change gears and give the basic postulates of quantum mechanics. We will solve the harmonic oscillator, and then discuss unitary dynamics in the Schrödinger and Heisenberg pictures. Having laid down our "basic" formalism, we will start the second hour by making the transition to classical field theory, where Noether's theorem reappears more powerful than ever. We will embark on the project of quantizing the free scalar field, which will turn out to satisfy the Klein-Gordon equation. Time permitting, I will also talk about interacting quantum field theories, Feynman diagrams, and the path integral formulation of QFT.

M568: All of Linear Algebra in Splash Fall 2017 (Nov. 04, 2017)
This class will be my attempt to do the entirety of an advanced undergraduate course in linear algebra in less than two hours.
We will develop the theory of finite-dimensional vector spaces, starting with vectors, linear combinations, and bases. We'll talk about the kernel and image of linear transformations before moving on to isomorphisms and changes of basis. We will introduce eigenvectors, discuss inner products, linear functionals, and the dual space, and finish off with self-adjoint operators and the spectral theorem. The goal of the class is to show you a wild ride through linear algebra, and to "give away the trade secrets" of a beautiful subject. After this class, consider taking my "All of Quantum Mechanics" sequence for a spectacular application of the ideas developed in this class. M576: False Theorems and Fake Proofs in Splash Fall 2017 (Nov. 04, 2017) In this introductory “math” class, we will “prove” some entirely false results in surprisingly convincing ways. For example, we'll show that 1 is the largest natural number, but that all natural numbers are equal to zero. Come find out why the Pythagorean theorem is a lot simpler than you thought (spoiler: $$c = a+b$$), how all functions integrate to zero, and as many proofs that $$1 = 2$$ as I have time for. The emphasis will be as much on making math hilarious as on introducing the methods of “proof” that make these results tick—and why they’re wrong. After this class, consider taking Theo's "Things you Think should be False but Aren't" for some more serious examples of the counterintuitive element in mathematics. S581: All of Quantum Mechanics (Part I) in Splash Fall 2017 (Nov. 04, 2017) Part I of an intense introduction to one of the most ridiculous and ridiculously beautiful physical theories ever invented. We will cover Hilbert space and quantum states, Dirac's bra-ket notation, observables and hermitian operators, quantum measurement and collapse, the spectral theorem, probabilities and the Born rule, and commuting observables.
We will conclude with the Stern-Gerlach experiment as a stunning demonstration of the quantum formalism. S582: All of Quantum Mechanics (Part II) in Splash Fall 2017 (Nov. 04, 2017) Part II of an intense introduction to one of the most ridiculous and ridiculously beautiful physical theories ever invented. We will start with the double-slit experiment, cover quantum wavefunctions, position and momentum eigenstates, the canonical commutator identity, and conclude with a derivation of the Schrödinger equation. This class is not to be taken without first taking Part I of this sequence! After this class, consider taking Ben Church's and Ryan Abbott's "Relativistic Quantum Theory" for a serious introduction to relativistic quantum mechanics. M501: False Theorems and Fake Proofs in Splash Spring 2017 (Mar. 25, 2017) In this introductory “math” class, we will “prove” some entirely false results in surprisingly convincing ways. For example, we’ll show that 1 is the largest natural number, that it’s actually equal to both -1 and 2, and that all natural numbers are a whole lot less than a million. Come find out why the Pythagorean theorem is a lot simpler than you thought (spoiler: $$c = a+b$$), why all infinite sets are the same, and more. The emphasis will be as much on making math hilarious as on introducing the methods of “proof” that make these results tick—and why they’re wrong. M502: All of Linear Algebra in Splash Spring 2017 (Mar. 25, 2017) This class will be my attempt to do the entirety of an undergraduate course in linear algebra in less than an hour. We will rapidly develop the theory of finite-dimensional vector spaces, starting with vectors and the notions of linear combination, basis, and dimension. Moving on to linear transformations, we will discuss the kernel, image, rank, and nullity of linear maps before diving into determinants and invertibility.
From there, we’ll talk about change of basis, introduce eigenvectors, and finish off with self-adjoint operators and the mathematical fireworks of the spectral theorem. The goal of the class is to introduce a new way of thinking about mathematical structures, and to show you a wild ride through a beautiful subject. M503: Groups and Representations in Splash Spring 2017 (Mar. 25, 2017) Group theory and linear algebra are two of the most stunningly beautiful areas of mathematics, and representation theory is what happens when you put them together. In this class, we will start by introducing groups and some cool stuff you can do with them (e.g. homomorphisms, group actions, etc.) before moving to vector spaces and discussing invariant subspaces and maps. We'll define representations, talk briefly about irreducibility, and then prove the cute but powerful lemma of Schur. If there's time, we can then move on to direct sums and discuss the decomposition of representations into irreducible pieces. S413: Quantum Mechanics I: A Mathematical Perspective in Splash Fall 2016 (Nov. 05, 2016) In this class, we will attempt to build quantum mechanics up from the ground. We start with the mathematical formalism that describes wave functions, the strange inhabitants of an infinite-dimensional world known as the Hilbert space $$L^2$$, as well as a special class of the linear operators that roam this space. We will go on to discuss the physical meaning of this formalism, along the way exploring probability and the measurement problem. And for the grand finale, we will prove Heisenberg's uncertainty principle and show, using the spectral theorem, that no two incompatible observables admit a simultaneous eigenbasis. While this class is not for the faint of heart and relies on some pretty powerful mathematical machinery, it does not assume any previous experience with quantum mechanics, or even physics for that matter. 
We have a lot to cover and will be moving fast, but I hope you will enjoy the ride! This class is the first in a two-part sequence of quantum mechanics. The second class approaches quantum mechanics from a completely different perspective and covers completely different material focused on the Schrödinger equation, its origins, solutions, and limitations. Students are encouraged to take both classes, but each one is self-contained and can be taken independently. S414: Quantum Mechanics II: A Physical Perspective in Splash Fall 2016 (Nov. 05, 2016) In this class, we will attempt to invent quantum mechanics. We begin with empty space, governed classically by Maxwell's equations. We will derive the wave equation and start shooting electromagnetic plane waves at a conducting surface, discovering (with Einstein's help) the photoelectric effect. On a whim, we will reformulate the wave equation in terms of its energy and momentum; by considering a massive particle from this perspective, we will find ourselves face to face with the Schrödinger equation. For the rest of the class, we will discuss some consequences, solutions, and limitations of the equation. As a grand finale—time permitting—we will break free of Schrödinger's non-relativistic limit and discover the Klein-Gordon equation, which describes all relativistic spinless particles. This class is certainly not for the faint of heart, and relies on powerful mathematical machinery in addition to a considerable amount of classical physics. We have a lot to cover and will be moving fast, but I hope you will enjoy the ride! This class is the second in a two-part sequence of quantum mechanics. The first class approaches quantum mechanics from a completely different perspective and covers completely different material focused on the mathematical formalism governing quantum mechanics. Students are encouraged to take both classes, but each one is self-contained and can be taken independently. 
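For reference, the two equations these quantum mechanics abstracts build toward can be written in standard conventions, with $$\hbar$$ the reduced Planck constant and $$m$$ the particle mass:

$$i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi \qquad \text{(Schrödinger)}$$

$$\frac{1}{c^2}\frac{\partial^2\phi}{\partial t^2} - \nabla^2\phi + \frac{m^2c^2}{\hbar^2}\,\phi = 0 \qquad \text{(Klein–Gordon)}$$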
M312: How to Break Math in Splash Fall 2015 (Nov. 14, 2015) You read the title correctly. In this fast-paced mathematical adventure, we will start by destroying the notions of truth and self-consistency in mathematical systems through Russell's Paradox and Gödel's Incompleteness Theorems. Next, we will do away with counting and size by measuring infinities against each other. Finally, we will gather what's left of mathematics just to demolish it, ending class with an astounding proof that a single ball can turn into two. Needless to say, this class is not for the faint of heart. There is a good bit of mathematical abstraction, but if you are willing to join me in deconstructing the foundations of everything we know, I think you will enjoy the ride and perhaps get a glimpse of the true beauty of mathematics.
Re: IFX bug taking the sqrt of 0.0?
12-08-2023 04:52 AM
I'm encountering what I think has to be a compiler bug in ifx version 2022.2.0. I don't see any way around it. I don't have a minimum example that demonstrates the issue, but in our main code base, I've added some debug code that looks like this:

print*,(var1(1)**2 + var2(1)**2)
print*,0.0 == (var1(1)**2 + var2(1)**2)
print*,sqrt(var1(1)**2 + var2(1)**2)

This prints:

0.0000000E+00
0.0000000E+00
forrtl: error (65): floating invalid

It crashes on the line trying to take the square root of zero even though taking the square root of zero should be fine, and I have no problem taking the sqrt of literal zeros. Is/was there a known bug in taking roots of zero variables? (I can't reproduce it in a minimal case, so it must require some complexity) I've requested that IT upgrade our version of IFX but in the meantime I'm wondering if I can stop working on trying to figure out what's going on here. I was getting a crash elsewhere in our code trying to take sqrt(0.0), but I added some code to bypass it in the zero case. Now that I'm encountering the same problem somewhere else, I'm thinking it must be the compiler--right?
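A common defensive pattern for this family of failures is to clamp the argument to zero before taking the root. The sketch below is in Python rather than Fortran and is purely illustrative (the function name is made up); it shows the guard, not the poster's code, and the FMA explanation in the docstring is one plausible cause, not a confirmed diagnosis:

```python
import math

def safe_sqrt_sum_sq(a, b):
    """sqrt(a**2 + b**2), clamping tiny negative rounding artifacts.

    One possible cause of the trap in optimized native code is
    contraction of a*a + b*b into fused multiply-add form, which can
    yield a result a hair below 0.0 even when the exact sum is 0.0;
    sqrt of a negative then raises a floating-invalid exception.
    The clamp makes the sqrt safe either way.
    """
    s = a * a + b * b
    return math.sqrt(max(s, 0.0))

print(safe_sqrt_sum_sq(0.0, 0.0))  # 0.0
print(safe_sqrt_sum_sq(3.0, 4.0))  # 5.0
# The clamp also neutralizes a tiny negative intermediate:
print(math.sqrt(max(-1e-300, 0.0)))  # 0.0
```

The same clamp can be written directly in Fortran as `sqrt(max(s, 0.0))`, at the cost of hiding a genuinely negative argument if one ever appears.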
Lorentz symmetry group, retardation and energy transformations in a relativistic engine
In a previous paper, we showed that Newton's third law cannot strictly hold in a distributed system whose parts are a finite distance from each other. This is due to the finite speed of signal propagation, which cannot exceed the speed of light in vacuum; as a result, when summing the forces in the system, they do not add up to zero. This was demonstrated in a specific example of two current loops with time-dependent currents, and the analysis led to the suggestion of a relativistic engine. Since the system experiences a total force for a finite period of time, it acquires mechanical momentum and energy, and the question arises of how to accommodate the laws of momentum and energy conservation. Momentum conservation was discussed in a previous paper, while preliminary results regarding energy conservation were discussed in some additional papers. Here we give a complete analysis of the exchange of energy between the mechanical part of the relativistic engine and the field part; the energy radiated from the relativistic engine is also discussed. We show that the relativistic engine effect on the energy is fourth order in \(1/c\), and that no lower-order relativistic engine effect on the energy exists.
• Electromagnetism
• Newton’s third law
• Relativity
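The retardation at the heart of the argument is that of the standard retarded potentials of classical electrodynamics: in SI units, the vector potential produced by a current density \(\vec{J}\) is

\[
\vec{A}(\vec{r},t) = \frac{\mu_0}{4\pi}\int \frac{\vec{J}\!\left(\vec{r}\,',\; t - |\vec{r}-\vec{r}\,'|/c\right)}{|\vec{r}-\vec{r}\,'|}\, d^3r',
\]

so the force one loop exerts on the other at time \(t\) depends on the current in the first loop at the retarded time \(t - |\vec{r}-\vec{r}\,'|/c\). Expanding such expressions in powers of \(1/c\) is what underlies the order counting quoted in the abstract.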
Productive Struggle and Its Importance
Productive struggle time in math is not natural. Allowing children to struggle with something goes against every instinct I have. When I see a child having difficulty, every pore in my body wants to take over and fix the problem. However, as a mom and as a teacher, I know that is exactly what I should not do. Struggle time is an essential, yet often skipped, part of the learning process. Struggling with a new concept is very disconcerting for students, especially your high-achieving perfectionists. I can’t tell you how many times I’ve seen students completely melt down or shut down over feeling stuck or defeated academically. This is obviously a negative reaction that I want to prevent and avoid. At the same time, I don’t want to eliminate the struggle from my students. Instead, I try to teach students how to persevere and problem solve to overcome the struggle. The Research Behind Productive Struggle Time I’m one of those people who has to see the research, so I’ve included just a tidbit of what I’ve learned over the years. If that’s not your thing, you can skip ahead to the actual classroom implementation portion of this blog post. Steven Katz and Lisa Ain Dack write in their book, Intentional Interruption: Breaking Down Learning Barriers to Transform Professional Practice: “The experience of cognitive discomfort is not an unfortunate consequence of new learning: it is an essential prerequisite of new learning.” I’m also taking a math course from Jo Boaler, who is a Stanford professor and math leader.
In her course, she explains that “mistakes and struggle make your brain grow.” The National Council of Teachers of Mathematics (NCTM) policy document, Principles to Actions: Ensuring Mathematical Success for All, notes that “an effective teacher provides students with appropriate challenges, encourages perseverance in solving problems, and supports productive struggle in learning mathematics.” Apparently the official term for this is “productive struggle.” This productive struggle should be less about frustration and more about trying something new. For the struggle to be productive, the mathematics must be within reach of students. Also, the math the students are wrestling with must be directly related to the mathematical goals of the lesson or unit. Classroom Implementation I’ve found that defining the struggle makes it much less intimidating and overwhelming for my students. In my classroom we call it “Struggle Time”. I explicitly teach a mini lesson on productive struggle time that is a great starting place for teaching students how to embrace the struggle. What is Productive Struggle Time? We begin the lesson by discussing what productive struggle time is, because most of my students have never heard the term before. I ask for volunteers to share a time when they were learning something new that they initially struggled with. While adults may not always enjoy volunteering this type of anecdote, I almost always have every student in the classroom want to share a story. After listening to a few examples, I ask students to describe how they felt as they struggled. Most students will acknowledge that they felt upset or frustrated. I push even further and ask students why they think they feel so upset when they struggle while learning something new. After listening to a few responses, I lead into a discussion of how we’ve been conditioned to think that when we struggle with something we’re not smart, but that absolutely is not true.
I use the following lead-in to introduce the idea of struggle time: When babies first learn to walk, they fall all the time. Does that mean they’re not smart or will never be able to walk correctly? Of course not! They are simply in the process of learning how to walk. What about young children learning how to read? If I walked into a kindergarten class and handed a student a chapter book could s/he read it? No. Does that mean they’re a bad reader? No! They simply have to learn how to read, and it may take a year or two, but they can and will eventually be able to read that chapter book. Boys and girls, struggling is part of the learning process. We can’t escape it or run away from it. This year during math workshop, you’re going to experience what I call struggle time. Productive Struggle Anchor Chart After introducing the idea of productive struggle time, I work with the class to create an anchor chart that defines and explains this portion of math workshop. This is a time when students get stuck solving a problem. I want my students to know that they will all struggle with something during the year. I explain that if they’re not struggling, I’m not doing my job as a teacher. If they already know everything that I’m teaching then they’re not learning anything new. This is the time when real learning occurs. Yes, I can tell a student how to solve a problem, but the student will only be copying my steps or a procedure. They’re not making sense of a concept and problem solving. In my classroom, our official “struggle time” is at the start of students’ work time. I begin with a mini lesson in which I teach a skill and explain directions, so this is not a time where students should experience a true “struggle time”. During this part of instruction, I’m constantly offering support and clarification. Instead, “struggle time” should begin when students start their workshop task. The presence or absence of the struggle is often a litmus test for the task itself.
If every student in my classroom can quickly complete the day’s activities, then more than likely the task wasn’t rigorous enough for that group of students. During students’ first five minutes of work time, I won’t help students or clarify how to solve the problem. I always make a big production of it: “Now, I know that sounds horrible! My teacher won’t help me with my math! But now do you all think I’m going to go sit at my desk and update my Facebook status? No way! Guess what I am going to be doing? That’s right! I’m watching you to see how you attack the problem. I’m listening for what strategies you’re using and what misconceptions you may have. Now do you think I’m going to make you struggle the WHOLE time? No way! After about five minutes, I’ll step in and help you, but you’ll be surprised to see how much you can do on your own!” After our “struggle time” is over, I will help students. I try hard to guide students with questioning rather than telling, but that is something I’m still working on. As the facilitator, it’s essential for me to provide students with tasks that are in the zone of proximal development for my students. This zone lies between what students can do and what is just out of reach. If a task is too simplified for students then there is no challenge to overcome. On the other hand, if the task is too difficult then the struggle is pointless, because the concept is not accessible to students. This is the reason I’m so picky about what I use for my math instruction. What Do I Do During Struggle Time? I then guide the lesson into teaching students what they can do when they get stuck during math work time. Rather than me telling students what strategies to use, I have my students brainstorm and tell me what strategies they can use. We always discuss the importance of attitude. I make a big, goofy demonstration about the difference between a negative attitude and positive attitude and talk about how important it is to remain positive.
I also make sure students know how to use the resources around the classroom. They have their Math Reference Notes, Anchor Charts, Word Wall Cards, and sample student work to refer to, so I want them to actually use those resources. I arrange my students into groups of four, and encourage them to talk to their group. During our work time, I want my students to talk to each other. Of course, I expect them to use accountable talk, which is a whole other lesson. I also talk about how it’s not necessary to solve the whole problem at one time. It’s okay to break the problem into parts and to take it one step at a time. Students often want to see the whole plan before they begin working, but that’s not always possible with complex tasks. I tell my students that whenever I write anything my first sentence is always my most difficult sentence, because getting started is hard. Throughout the year, I will teach problem solving strategies that my students will be able to apply to math workshop and productive struggle time. When I refer to problem solving, I’m not talking about general word problems or even multi-step word problems. Instead, I’m talking about problem solving tasks. For example, in the task below students problem solve to determine how much pizza each child can receive. This is not an easy task for the beginning of a fraction unit, and most students will need lots of trial and error, but with time and problem solving they can solve the problem and by solving the problem develop a deeper fraction number sense. What NOT TO Do During Struggle Time I end the lesson by discussing what not to do during productive struggle time. My students suggested pout or cry, so I added that to the anchor chart. I added run away to the chart. My students laughed, because they thought I was being literal. But I explained I often see students avoiding challenging work by going to the restroom, asking to to get water or refill water bottles, asking to go to the nurse, etc. 
I see that frequently, and I think my students are surprised that I notice. The final and most important thing for students to not do is to just sit there. Nope. That’s not okay. I explain that I would MUCH rather see them try and fail than not try at all. After our “struggle time” is over, I don’t want someone to raise their hand and say, “I don’t get it” after making no attempt. We talk about how students must show an effort to get started and to overcome their challenges. We talk about how sitting and waiting to be rescued doesn’t help the learner. As teachers, when we rescue students, we’re communicating that we don’t believe the student can solve the problem. Instead, after the initial five minutes, I ask guided questions that hopefully help students get started. If even then I see that a student is completely confused and not making any forward progress, I will typically work with that student individually or in a small group if there are other students in the same situation. Class Environment For productive struggle to be effective it is essential to have the right classroom environment. Students must feel safe to take risks. Students have to be taught that it’s okay to try and to fail. They must shift their mindset to thinking that failure can be a learning tool, not an end result. Students must also feel safe and comfortable to share with their classmates. They need to know that their ideas and suggestions won’t be laughed at or belittled. It’s also necessary to take away the fear of making a bad grade. While I do give summative assessments, a learning task is not one that should be graded for the purpose of recording a grade. It’s fine to use the tasks as formative assessments, but I’ve found students feel much freer to accept change when the threat of a failing grade is removed. This also requires a change in the mindset of teachers. Many feel that if there is not immediate success the teacher has failed, and we have to change that thought pattern.
This philosophy of teaching has tremendously reduced the amount of learned helplessness I see from my students. It’s dramatically reduced the stress level of my high-achieving students, and encouraged everyone to be willing to learn and grow as mathematicians. You can read more about teaching elementary math here. There is a TON of great content for you!
2 thoughts on “Productive Struggle and Its Importance”
1. Christine Duprow
Super article, I cannot wait to use this! I just finished the week of inspirational math from Jo so this will go hand in hand. Thanks so much. I copied below perhaps a typo? I’m thinking you meant to say “must” me within students reach. 🙂
For the struggle to be productive, the mathematics mush be within reach of
2. Kim
I love this. I have been talking with my students about productive struggle and this will help me tremendously with some visuals.
Addition Facts Worksheets This is a collection of addition facts worksheets -- adding two single digit numbers. These are classic worksheets for homework, math centers, and sub-plans, and general practice. Take note that this is a subset of our addition worksheets packet. What You'll Get You will receive several printable files. These are 2-page files with worksheets and answer keys. They are organized into folders for easy access. Just pick a worksheet, print and you're ready to There are 11 exam types, each with 15 worksheets. 1. Add the numbers (Aligned). The addends are aligned vertically 2. Add the numbers (Horizontal) and write their sum on the box provided 3. Color the correct answer (2 choices) 4. Color the correct answer (3 choices) 5. Choose the Correct Answer from a List 6. Match two Columns – Addends vs Sum 7. Find the Missing Addend 8. Find the Missing Added or Sum 9. Adding 3 Numbers. Add the three numbers. Write your answer on the space provided. 10. Add 3 numbers, 2 sums. Add the first two numbers, and then add the result to the third number. We bundled this with other products so you can get discounts. Just click any of the bundles below for more details.
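To make the item types above concrete, here is a minimal sketch in Python of how a single-digit addition fact with answer choices (as in the "color the correct answer" and "choose the correct answer" worksheet types) could be generated. This is a hypothetical illustration, not the product's actual generator:

```python
import random

def make_addition_item(rng, n_choices=3):
    """One single-digit addition fact plus multiple-choice answers."""
    a, b = rng.randint(0, 9), rng.randint(0, 9)
    answer = a + b
    # Distractors: values near the correct sum, never below zero.
    choices = {answer}
    while len(choices) < n_choices:
        choices.add(max(0, answer + rng.randint(-2, 2)))
    shuffled = sorted(choices)
    rng.shuffle(shuffled)
    return {"a": a, "b": b, "answer": answer, "choices": shuffled}

rng = random.Random(7)  # seeded so a worksheet can be regenerated
item = make_addition_item(rng)
print(f"{item['a']} + {item['b']} = ?  choices: {item['choices']}")
```

The same generator, called fifteen times per page, would fill one worksheet; a second pass mapping each item to its answer produces the answer key.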
Hodge Theory Mohammed Abouzaid A tropical approach to the Gamma conjectures The Gamma conjectures are a collection of conjectures relating the asymptotic behaviour of Iritani's integral structure on quantum cohomology to classical topology via a twisted square root of the Todd class (the Gamma class). Via mirror symmetry, the conjectures can be related to periods of the mirror varieties. I will present joint work with Ganatra–Iritani–Sheridan which reproves the most basic of the Gamma conjectures for Calabi–Yau hypersurfaces via a tropical approach. The basic idea is that the leading term of the asymptotic expansion of the volume of the real locus is given by the tropical volume of the base, but that the subleading term can be expressed in terms of the tropical singularities, and recovers the Gamma conjecture predictions. Roman Bezrukavnikov Applied quantization of Hitchin integrable system The talks will focus on aspects of geometric Langlands duality related to other chapters of geometry and representation theory, such as quantization in positive characteristic, Bridgeland stability conditions and abelian categories of representation theoretic interest. Ron Donagi Non-Abelian Hodge Theory and Geometric Langlands We will review the Geometric Langlands program, emphasizing its overlaps with homological mirror symmetry, non-Abelian Hodge theory, and Hitchin's system, and will describe some recent results on the construction of automorphic sheaves in several geometrically accessible cases. Phillip Griffiths What is complex algebraic geometry? Algebraic geometry is the study of the geometry of algebraic varieties, defined as the solutions of a system of polynomial equations over a field \(\mathrm{k}\). When \(\mathrm{k} = \mathbb{C}\) the earliest deep results in the subject were discovered using analysis, and analytic methods (complex function theory, PDEs and differential geometry) continue to play a central and pioneering role in algebraic geometry.
The objective of these talks is to present an informal, historical, and illustrative account of some answers to the question in the title. Every attempt will be made to make the talks accessible to an audience of graduate students and post docs. Slides: Part 1, Part 2, Part 3, Part 4 Nigel Hitchin Mirror symmetry for Higgs bundles The Strominger–Yau–Zaslow approach to mirror symmetry is well adapted to the Special Lagrangian fibration given by the Higgs bundle integrable system. The two talks will discuss various aspects of this including the semi-flat metric, duality of the fibres, and the questions raised by the conjectured symmetry between BAA branes, which are holomorphic Lagrangian submanifolds, and BBB branes, which are supported on hyperkahler submanifolds. Slides: Part 1, Part 2 Maxim Kontsevich Duality with corners I will describe an algebraic structure which appears in several setups: 1. Poincare duality on manifolds with corners (here is the origin of the name), 2. Serre duality on an algebraic variety endowed with anticanonical divisor with normal crossings, 3. Interaction of mixed Hodge structures and Poincare duality on cohomology of smooth varieties with normal crossing divisors (by Goncharov), 4. A new clean construction of Fukaya and Fukaya-Seidel categories. The structure under the title is a variant of the pre-Calabi-Yau structure of Vlassopoulos and myself, and of the relative Calabi-Yau structure of Brav and Dyckerhoff. John Morgan Hodge Theory in Homotopy Theory In the first lecture we will review classical Hodge theory for the cohomology of compact Kahler manifolds and show how it is a consequence of the d-dbar lemma. We will then review Sullivan's theory of differential algebras and rational homotopy theory and deduce the formality of the rational homotopy type of a compact Kahler manifold.
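For reference, the classical decomposition reviewed in this first lecture states that, for a compact Kähler manifold \(X\),

\[
H^n(X;\mathbb{C}) = \bigoplus_{p+q=n} H^{p,q}(X), \qquad \overline{H^{p,q}(X)} = H^{q,p}(X),
\]

where \(H^{p,q}(X)\) consists of the classes representable by harmonic forms of type \((p,q)\).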
In the second lecture, we review Deligne's notion of Mixed Hodge Structures and reformulate the result of the first lecture in terms of a Mixed Hodge Structure on the homotopy type. We then show that the homotopy type of an open smooth complex algebraic variety has a Mixed Hodge Structure extending that constructed by Deligne on the cohomology, and deduce homotopy theoretic consequences. Slides: Part 1, Part 2 Tony Pantev Homological Mirror Symmetry and the mirror map for del Pezzo surfaces I will discuss the general mirror symmetry question for symplectic del Pezzo surfaces in a setup that goes beyond the Hori-Vafa construction. I will explain how homological mirror symmetry considerations lead to an explicit description of the mirror map and will discuss some consequences that can be checked directly. This is joint work with Auroux, Katzarkov and Orlov. Konstanze Rietsch Mirror symmetry for some homogeneous spaces I will give an overview of mirror symmetry for homogeneous spaces \(G/P\) from a Lie-theoretic perspective. The picture is particularly nice for co-minuscule \(G/P\). Carlos Simpson Asymptotics of the monodromy of local systems — WKB problems and harmonic mappings to buildings We'll discuss families of connections or Higgs bundles approaching the divisor at infinity in the respective moduli spaces, and what kind of behavior is expected for the corresponding monodromy representation. This goes under the name "WKB problem". In turn, the exponents that appear are governed by a harmonic map to a building (Parreau). We'll then discuss our results with Katzarkov, Pandit and Noll: the derivative is given by the spectral curve associated to the WKB problem (due to Mochizuki for the Higgs case), and we have a program going towards the construction of a versal harmonic map to a pre-building that depends only on the spectral curve.
Towards the construction of stability conditions for rank 3 spectral curves This is about current work in progress with Haiden, Katzarkov and Pandit. We would like to consider Fukaya–Seidel type categories of sections of a constant sheaf of categories such as \(A_2\) over Lagrangian graphs in the complex plane with boundary points. Following Kontsevich's idea of interpreting this situation as a limiting Fukaya category for a fibration, one expects that there should be a stability condition, combining base and fiber, whose stable objects have "special Lagrangian" representatives that are basically the Gaiotto–Moore–Neitzke spectral networks. We'll report on the current status of our progress in extracting the first destabilizing subobject of the Harder–Narasimhan filtration in this setting.
DPLS Scientific Calculator version 2.8.0

15.25 MB | Category: Applications | License: Dot Point Learning Systems

This is a highly functional and easy to use Scientific Calculator. It incorporates extensive science tools including a triangle calculator, vector calculator, shape calculator, kinematic calculator, half-life calculator, maths calculator, gas laws calculator, statistical calculator, molar mass calculator, pH calculator and one of the most extensive measurement converters available. Over 300 commonly used compounds can be quickly called to list their name, formula, molecular mass and CAS number. Over 100 constants can be called to list their numerical value and uncertainty value. Over 300 formulas and equations are listed for science, maths, trigonometry and statistics. It lists over 200 science symbols, Greek symbols, maths and statistics symbols.

Numerous science data reference systems can be called including SI units, derived quantities, maths laws, atomic structures, organic compounds, ions, homologous series, frequencies, shapes, angle types, conjugate pairs and acids and bases. The system contains extensive listings of electrical circuit symbols, piping and process symbols and material properties. It contains 8 colour coded periodic tables that can be used to find 30 types of property data for each element. It contains full electron configuration of the elements, radiation types and quantum numbers. Interactive flowcharts can be called for mechanical and electrical units.

A glossary system contains the phonetic alphabet as well as extensive lists of science and medicine fields, and acronyms and abbreviations used in science and computing. It lists the activity series and describes chemical reaction types. It contains an event timer, yearly calendar and world times system. A help system explains operation, and system contents can be quickly located with a search system.
Free Science and Video Lectures Online!

New Physics Video Lectures

Hello everyone! This month I have new physics video lectures. They include MIT thermodynamics and kinetics (36 lectures), MIT physics demos (mechanics, waves, electricity, magnetism, quantum physics), Stephen Hawking's TED talk on important questions about the universe, a lecture that goes beyond Einstein, Formula 1 aerodynamics, Journey into a Schwarzschild black hole (computer animation), World's most powerful microscope, and The mother of all time travel paradox.

Thermodynamics & Kinetics (MIT Course 5.60)

Lecture topics: State of a system, 0th law of thermodynamics, equation of state. Work, heat, first law. Internal energy, expansion work. Enthalpy. Adiabatic changes. Thermochemistry. Calorimetry. Second law. Entropy and the Clausius inequality. Entropy and irreversibility. Fundamental equation, absolute S, third law. Criteria for spontaneous change. Gibbs free energy. Multicomponent systems, chemical potential. Chemical equilibrium. Temperature, pressure and Kp. Equilibrium: application to drug design. Phase equilibria one component. Clausius-Clapeyron equation. Phase equilibria two components. Ideal solutions. Non-ideal solutions. Colligative properties. Introduction to statistical mechanics. Partition function (q) large N limit. Partition function (Q) many particles. Statistical mechanics and discrete energy levels. Model systems. Applications: chemical and phase equilibria. Introduction to reaction kinetics. Complex reactions and mechanisms. Steady-state and equilibrium approximations. Chain reactions. Temperature dependence, Ea, catalysis. Enzyme catalysis. Autocatalysis and oscillators.

MIT Physics Demos

Demos include: Faraday cage. Balloons in liquid nitrogen. Temperature effect on resistance. Rubber and glass rods and electricity. Pushing and pulling. Monkey and the gun. Dipole antenna. RLC circuits. Levitating magnet. Exploding wire, and many more.
Stephen Hawking: Asking big questions about the universe

Short lecture summary: Professor Stephen Hawking asks some big questions about our universe -- How did the universe begin? How did life begin? Are we alone? -- and discusses how we might go about answering them.

Beyond Einstein

Lecture description: The lecture is part of the World Science Festival 2008. Albert Einstein spent his last thirty years unsuccessfully searching for a 'unified theory' - a single master principle to describe everything in the universe, from tiny subatomic particles to immense clusters of galaxies. In the decades since, generations of researchers have continued working toward Einstein's dream. Renowned physicists Leonard Susskind, Jana Levin, Jim Gates, and prominent historian Peter Galison discussed what's been achieved and tackled pivotal questions. Would a unified theory reveal why there is a universe at all? Would it tell us why mathematics is adept at unraveling nature's mysteries? Might it imply we are one universe of many, and what would that mean for our sense of how we fit into the cosmos? Moderated by Nobel Laureate Paul Nurse.

Formula 1 Aerodynamics

Short video description: Basic view into aerodynamics of the Formula 1 car explained by Martin Brundle.

The mother of all time travel paradox

The paradox: The question is: Who is Jane's mother, father, grandfather, grandmother, son, daughter, granddaughter, and grandson? The girl, the drifter, and the bartender, of course, are all the same person. These paradoxes can make your head spin, especially if you try to untangle Jane's twisted parentage. If we draw Jane's family tree, we find that all the branches are curled inward back on themselves, as in a circle. We come to the astonishing conclusion that she is her own mother and father! She is an entire family tree unto herself.
Journey into a Schwarzschild black hole

Black hole video description: The simplest kind of black hole is a Schwarzschild black hole, which is a black hole with mass, but with no electric charge, and no spin. Karl Schwarzschild discovered this black hole geometry at the close of 1915, within weeks of Einstein presenting his final theory of General Relativity.

World's Most Powerful Microscope

Microscope description: Lawrence Berkeley National Lab recently turned on a $27 million electron microscope. Its ability to make images to a resolution half the width of a hydrogen atom made it the most powerful microscope in the world.

Enjoy the videos!
GE B4 Ready with Support – Department of Mathematics and Statistics

Students in NON-STEM majors who have met one or none of the following criteria are placed in a GE B4 Ready with Support math/quantitative reasoning course.

Note: ACT and SAT scores are not required but will be considered for math placement if the student submits test scores.

Students in STEM majors who have met one or none of the following criteria are placed in a GE B4 Ready with Support math/quantitative reasoning course.
Pro Problems: Star Operations

Featured Pro Problems

Solve an algebra problem involving a defined binary operation (the hashtag or pound symbol)
Find the value of x that makes the equation true, using the non-commutative operation specified
Find an ordered pair of values based on a definition of a star operation
Diamond Operation - calculate the value of x in the given diamond operation
A new operation involving fractions. Find the missing value
Find an operation rule that will satisfy the table of values given
Find the conditions under which the defined operation is commutative
Star operation and pound operation - two new operations in one problem

Full Directory Listing

Commutative Operation, Diamond Operation, Fractional Operation, Guess the Star, h and k, Hashtag Operation, Non-Commutative Operation, Star and Pound Operations, Star Operation with Cubes, Triangular Operation
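As a flavor of what these problems involve, here is a sketch of a hypothetical "star" operation (the rule a ★ b = a·b + a + b is invented for illustration, not taken from any listed problem) together with a brute-force commutativity check of the kind the last problem type asks about.

```python
def star(a, b):
    """A hypothetical defined binary operation: a * b + a + b."""
    return a * b + a + b

def is_commutative(op, values):
    """Check op(a, b) == op(b, a) for all pairs drawn from `values`."""
    return all(op(a, b) == op(b, a) for a in values for b in values)

# star is symmetric in a and b, so it is commutative:
print(star(2, 5))                          # 17
print(is_commutative(star, range(-5, 6)))  # True

# A non-commutative variant for contrast:
noncomm = lambda a, b: 2 * a + b
print(is_commutative(noncomm, range(-5, 6)))  # False
```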
Mixed finite element problems#

As indicated earlier, the finite element method can be used for PDEs that consist of multiple physical quantities. In this section, we will consider the Stokes problem, which can be formulated as
\[\begin{split} - \nabla \cdot ( \nabla \mathbf{u}) + \nabla p &= \mathbf{f} \quad \text{in } \Omega \\ \nabla \cdot \mathbf{u} &= 0 \quad \text{in } \Omega \end{split}\]
where \(\mathbf{u}\) is the velocity field, \(p\) is the pressure field, and \(\mathbf{f}\) is a given source term.

Symmetric variational form

We prefer problems that have a symmetric structure. For this reason, we introduce \(\hat p=-p\) and rewrite the problem as
\[\begin{split} - \nabla \cdot ( \nabla \mathbf{u} + \hat p I) &= \mathbf{f} \quad \text{in } \Omega \\ \nabla \cdot \mathbf{u} &= 0 \quad \text{in } \Omega \end{split}\]
With this re-formulation, we can create the variational form by introducing a pair of test functions \(\mathbf{v}\in V\), \(q \in Q\), multiplying each equation by the corresponding test function, and integrating over the domain:
\[\begin{split} \int_{\Omega} - \nabla \cdot ( \nabla \mathbf{u} + \hat p I) \cdot \mathbf{v} ~\mathrm{d}x &= \int_\Omega \mathbf{f} \cdot \mathbf{v}~\mathrm{d}x\\ \int_{\Omega} (\nabla \cdot \mathbf{u})\, q ~\mathrm{d}x &= 0 \end{split}\]
We integrate the first equation by parts to obtain
\[\begin{split} \int_{\Omega} \nabla \mathbf{u} : \nabla \mathbf{v} ~\mathrm{d}x + \int_{\Omega} \hat p \nabla \cdot \mathbf{v} ~\mathrm{d}x -\int_{\partial \Omega} ((\nabla \mathbf{u} + \hat p I) \cdot \mathbf{n}) \cdot \mathbf{v} ~\mathrm{d}s &= \int_\Omega \mathbf{f} \cdot \mathbf{v}~\mathrm{d}x\\ \int_{\Omega} (\nabla \cdot \mathbf{u})\, q ~\mathrm{d}x &= 0 \end{split}\]

The boundary term

From integration by parts, we obtain a boundary term that depends on the normal derivative of the velocity field.
Thankfully, we use the natural boundary condition for the Stokes problem wherever we do not have a Dirichlet boundary condition on the velocity. In other words, we apply the following conditions on the boundary \(\partial \Omega = \partial\Omega_D\cup\Gamma\), where \(\partial\Omega_D\) and \(\Gamma\) are two disjoint sets of the boundary.
\[\begin{split} \mathbf{u} &= \mathbf{u}_D \quad \text{on } \partial\Omega_D\\ \nabla \mathbf{u} \cdot \mathbf{n} + \hat p \mathbf{n} &= \mathbf{g} \quad \text{on } \Gamma \end{split}\]
This reduces the system to
\[\begin{split} \int_{\Omega} \nabla \mathbf{u} : \nabla \mathbf{v} ~\mathrm{d}x + \int_{\Omega} \hat p \nabla \cdot \mathbf{v} ~\mathrm{d}x &= \int_\Omega \mathbf{f} \cdot \mathbf{v}~\mathrm{d}x + \int_{\Gamma} \mathbf{g} \cdot \mathbf{v} ~\mathrm{d}s\\ \int_{\Omega} (\nabla \cdot \mathbf{u})\, q ~\mathrm{d}x &= 0 \end{split}\]
We start by setting up this variational formulation for a unit square domain \(\Omega = [0,1]\times[0,1]\).

from mpi4py import MPI

import dolfinx
import basix.ufl
import ufl
import numpy as np

M = 6
mesh = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, M, M)

Finite elements and mixed function space#

Next, we define the finite element spaces we would like to use for the velocity and pressure fields. We do this as described in Introduction to the Unified Form Language and use basix.ufl.mixed_element to create a mixed element for the velocity and pressure fields. We choose the Taylor-Hood finite element pair for this problem.
el_u = basix.ufl.element("Lagrange", mesh.basix_cell(), 3, shape=(mesh.geometry.dim,))
el_p = basix.ufl.element("Lagrange", mesh.basix_cell(), 2)
el_mixed = basix.ufl.mixed_element([el_u, el_p])

Test and trial functions in mixed spaces#

We can now define our mixed function space and the corresponding test and trial functions

W = dolfinx.fem.functionspace(mesh, el_mixed)
u, p = ufl.TrialFunctions(W)
v, q = ufl.TestFunctions(W)

Test and trial functions in mixed spaces

We observe that we use ufl.TrialFunctions and ufl.TestFunctions to define the trial and test functions for the mixed space, rather than ufl.TestFunction and ufl.TrialFunction. We could use the latter, but would then either have to index the corresponding functions or use ufl.split to extract the components. To see an alternative approach for defining the test and trial functions, expand the below cell

Show code cell source

w = ufl.TrialFunction(W)
u, p = ufl.split(w)
u = ufl.as_vector([w[0], w[1]])
p = w[2]

Next we define a function wh in W to represent the solution.

wh = dolfinx.fem.Function(W)

We can split this function into symbolic components for the velocity and pressure fields with ufl.split

Boundary integrals#

So far, we have only considered the integrals over the domain \(\Omega\). We also need to consider the integrals over the boundary. We do this by introducing the exterior facet measure, called ds in UFL. This measure will only consist of facet integrals over those facets that are connected to a single cell.
ds = ufl.Measure("ds", domain=mesh)

We create a function g to represent any potential natural boundary conditions

g = dolfinx.fem.Constant(mesh, dolfinx.default_scalar_type((0, 0)))

Variational form#

We can now define the variational formulation

f = dolfinx.fem.Constant(mesh, dolfinx.default_scalar_type((0, 0)))
F = ufl.inner(ufl.grad(u), ufl.grad(v)) * ufl.dx
F += ufl.inner(p, ufl.div(v)) * ufl.dx
F += ufl.inner(ufl.div(u), q) * ufl.dx
F -= ufl.inner(f, v) * ufl.dx

We can do as previously, and split this into a bilinear and linear form with

Locating a subset of entities on a boundary#

We now want to apply a Dirichlet condition to the degrees of freedom on some subset of the boundary. We start by locating a subset of facets on the boundary, using dolfinx.mesh.locate_entities_boundary. Let's say we want to prescribe the conditions:
\[\begin{split} u(0, y) &= (ux, uy) \\ u(x, 0) &= u(1, y) = (0, 0) \end{split}\]
We would have to locate which degrees of freedom are in the closure of this boundary. Previously, we used dolfinx.mesh.exterior_facet_indices to locate all the facets on the boundary. However, in this case, we only want to find a subset of facets on the boundary. We therefore use dolfinx.mesh.locate_entities_boundary:

1. Define a function boundary_marker that takes in a set of coordinates x on the form (3, num_points) and returns an array of length num_points indicating if the point is on the boundary. For instance, for the case \(u(0, y) = (ux, uy)\), we would have

def boundary_marker(x):
    return np.isclose(x[0], 0.0)

mesh.topology.create_connectivity(mesh.topology.dim - 1, mesh.topology.dim)
left_facets = dolfinx.mesh.locate_entities_boundary(mesh, mesh.topology.dim - 1, boundary_marker)

left_facets=array([ 73, 76, 91, 103, 112, 118], dtype=int32)

What points are sent into locate_entities_boundary?

The function checks that all vertices of a given entity satisfy the input constraint.
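To make the marker convention concrete, here is a pure-numpy sketch (no mesh or dolfinx needed) of marker functions acting on the (3, num_points) coordinate layout described above; the example points are made up for illustration.

```python
import numpy as np

# Coordinates in the (3, num_points) layout a marker function receives:
# each column is one point (x, y, z); z is zero for a 2D mesh.
x = np.array([
    [0.0, 0.5, 1.0, 0.0, 1.0],   # x-coordinates
    [0.0, 0.0, 0.0, 1.0, 1.0],   # y-coordinates
    [0.0, 0.0, 0.0, 0.0, 0.0],   # z-coordinates
])

def left_marker(pts):
    # True for points on the line x = 0
    return np.isclose(pts[0], 0.0)

def top_bottom_marker(pts):
    # Union of two constraints via the elementwise "or"
    return np.isclose(pts[1], 0.0) | np.isclose(pts[1], 1.0)

print(left_marker(x))        # [ True False False  True False]
print(top_bottom_marker(x))  # [ True  True  True  True  True]
```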
Whenever we have a union of constraints, we can use the numpy bit operations & (and) and | (or) to combine the constraints.

Create a boundary marker for the top and bottom boundary and locate the facets on those boundaries

We can use numpy.isclose for each of the cases, and combine them with numpy.bitwise_or or |

Expand the below to see the solution

def top_bottom_marker(x):
    return np.isclose(x[1], 1.0) | np.isclose(x[1], 0.0)

tb_facets = dolfinx.mesh.locate_entities_boundary(mesh, mesh.topology.dim - 1, top_bottom_marker)

For completeness, we find the remaining facets on the boundary

all_boundary_facets = dolfinx.mesh.exterior_facet_indices(mesh.topology)
remaining_facets = np.setdiff1d(all_boundary_facets, np.union1d(tb_facets, left_facets))

Dirichlet conditions in mixed spaces#

Now that we have found the boundaries of interest, we can create the Dirichlet conditions. We start by considering what we have already seen: in the previous sections, Dirichlet conditions have been applied as input to dolfinx.fem.dirichletbc in the following way:

1. A function \(w_D\in W\) from the space that contains the Dirichlet condition values
2. A list of the dofs in the space \(W\) that should be constrained

However, what happens if we use this strategy on a mixed space? As w_D is a mixed function, we have both pressure and velocity components in this space. Thus, if we set all entries in the BC to a constant value, we would set both the pressure and velocity to the same value.
We illustrate this below

w_D = dolfinx.fem.Function(W)
w_D.x.array[:] = 0.43
wh = dolfinx.fem.Function(W)
dofs = dolfinx.fem.locate_dofs_topological(W, mesh.topology.dim - 1, tb_facets)
bc = dolfinx.fem.dirichletbc(w_D, dofs)

A convenience function for visualizing the velocity and pressure fields is found by expanding the cell below

Show code cell source

import pyvista
import os, sys
if sys.platform == "linux" and (os.getenv("CI") or pyvista.OFF_SCREEN):

def visualize_mixed(mixed_function: dolfinx.fem.Function, scale=1.0):
    u_c = mixed_function.sub(0).collapse()
    p_c = mixed_function.sub(1).collapse()
    u_grid = pyvista.UnstructuredGrid(*dolfinx.plot.vtk_mesh(u_c.function_space))
    # Pad u to be 3D
    gdim = u_c.function_space.mesh.geometry.dim
    assert len(u_c) == gdim
    u_values = np.zeros((len(u_c.x.array) // gdim, 3), dtype=np.float64)
    u_values[:, :gdim] = u_c.x.array.real.reshape((-1, gdim))
    # Create a point cloud of glyphs
    u_grid["u"] = u_values
    glyphs = u_grid.glyph(orient="u", factor=scale)
    plotter = pyvista.Plotter()
    plotter.add_mesh(u_grid, show_edges=False, show_scalar_bar=False)
    p_grid = pyvista.UnstructuredGrid(*dolfinx.plot.vtk_mesh(p_c.function_space))
    p_grid.point_data["p"] = p_c.x.array
    plotter_p = pyvista.Plotter()
    plotter_p.add_mesh(p_grid, show_edges=False)

What we want to do instead is to only apply the boundary condition to the velocity sub-space. We do this by first getting a function in the sub-space of the velocity field:

W0 = W.sub(0)
V, V_to_W0 = W0.collapse()

What is the difference between W0 and V?

When you call the sub command on a dolfinx function space (or function), you get a view into the sub-space, i.e. you will get a dofmap that only contains the degrees of freedom that are in the sub-space. However, the global dof numbering is still preserved, meaning that a dof in the sub-space will have the same index as in the global space.
It also means that a function accessed through u.sub(i) will contain all the degrees of freedom of the global space. We use W0.collapse() to get a self-contained function space of only the degrees of freedom in the sub-space. We also get back a map from each degree of freedom in the sub-space to the degree of freedom in the parent space. This can be super-useful when we want to assign data from collapsed sub-spaces to the parent space!

With the collapsed function space, we can create a function in the sub-space of the velocity field

u_D = dolfinx.fem.Function(V)
u_D.x.array[:] = 0.11

Next, we want to find the dofs of the collapsed sub-space, and map them to the parent space

sub_dofs = dolfinx.fem.locate_dofs_topological(V, mesh.topology.dim - 1, tb_facets)

However, these dofs are in a blocked (vector) space, and one has to unroll them by the block size

unrolled_sub_dofs = np.empty(len(sub_dofs) * V.dofmap.bs, dtype=np.int32)
for i, dof in enumerate(sub_dofs):
    for j in range(V.dofmap.bs):
        unrolled_sub_dofs[i * V.dofmap.bs + j] = dof * V.dofmap.bs + j

We can now map them to the parent space

parent_dofs = np.asarray(V_to_W0)[unrolled_sub_dofs]

We could also do this as a one-liner with

combined_dofs = dolfinx.fem.locate_dofs_topological((W0, V), mesh.topology.dim - 1, tb_facets)

Let us create a Dirichlet condition with these degrees of freedom. We now pass in the prescribing function u_D first, then the tuple of (parent_dofs, sub_dofs). As a final argument, we tell the Dirichlet condition what space we are working with, in this case W0

new_bc = dolfinx.fem.dirichletbc(u_D, combined_dofs, W0)
new_wh = dolfinx.fem.Function(W)

Sub-spaces of sub-spaces

In some cases we only want to constrain one of the components of the vector space, say \(\mathbf{u}=(u_x, u_y)\) with \(u_y=h(x,y)\), while \(u_x\) is unconstrained. We can do this by repeating the strategy above, but with W.sub(0).sub(1) and its corresponding collapsed space.
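As a side note, the dof-unrolling loop above can also be written as a vectorized numpy one-liner. Here is a standalone sketch with made-up dof indices and a block size of 2, so it runs without a mesh:

```python
import numpy as np

bs = 2                                           # block size of the vector space
sub_dofs = np.array([3, 7, 9], dtype=np.int32)   # example blocked dof indices

# Loop version, as in the text: each blocked dof expands to bs scalar dofs.
unrolled_loop = np.empty(len(sub_dofs) * bs, dtype=np.int32)
for i, dof in enumerate(sub_dofs):
    for j in range(bs):
        unrolled_loop[i * bs + j] = dof * bs + j

# Equivalent vectorized version using broadcasting.
unrolled_vec = (sub_dofs[:, None] * bs + np.arange(bs)).ravel()

print(unrolled_loop)  # [ 6  7 14 15 18 19]
assert (unrolled_loop == unrolled_vec).all()
```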
We can of course do the same for the pressure conditions, by collapsing the second sub-space. In the case above, \((u_x, u_y)\) was constant in both components, so you might wonder why we did not just send in a constant value. This was simply done for illustrative purposes. We will now illustrate the final way of applying a Dirichlet condition

uy_D = dolfinx.fem.Constant(mesh, dolfinx.default_scalar_type(0.3))
sub_dofs = dolfinx.fem.locate_dofs_topological(W.sub(0).sub(1), mesh.topology.dim - 1, left_facets)
bc_constant = dolfinx.fem.dirichletbc(uy_D, sub_dofs, W.sub(0).sub(1))

Note that this looks quite similar to how we sent in collapsed sub-functions. However, we no longer require a map from the collapsed space (reflected in the input to locate_dofs_topological).

Dirichlet boundary conditions on sub-spaces with a block size

In DOLFINx, we cannot use the above syntax for a vector constant that is applied to W.sub(0). One can either constrain each component with a constant individually, or use a function as shown previously.

newer_wh = dolfinx.fem.Function(W)

For the following exercise, we set ux=1, uy=0 and \(\mathbf{g}=\mathbf{f}=0\).
Expand the below cell to see the solution

Show code cell source

W0 = W.sub(0)
V, V_to_W0 = W0.collapse()
u_inlet_x = dolfinx.fem.Constant(mesh, dolfinx.default_scalar_type(1))
u_inlet_y = dolfinx.fem.Constant(mesh, dolfinx.default_scalar_type(0))
dofs_inlet_x = dolfinx.fem.locate_dofs_topological(W.sub(0).sub(0), mesh.topology.dim - 1, left_facets)
dofs_inlet_y = dolfinx.fem.locate_dofs_topological(W.sub(0).sub(1), mesh.topology.dim - 1, left_facets)
bc_inlet_x = dolfinx.fem.dirichletbc(u_inlet_x, dofs_inlet_x, W.sub(0).sub(0))
bc_inlet_y = dolfinx.fem.dirichletbc(u_inlet_y, dofs_inlet_y, W.sub(0).sub(1))
u_wall = dolfinx.fem.Function(V)
u_wall.x.array[:] = 0
dofs_wall = dolfinx.fem.locate_dofs_topological((W0, V), mesh.topology.dim - 1, tb_facets)
bc_wall = dolfinx.fem.dirichletbc(u_wall, dofs_wall, W0)
a_compiled = dolfinx.fem.form(a)
L_compiled = dolfinx.fem.form(L)
A = dolfinx.fem.create_matrix(a_compiled)
b = dolfinx.fem.create_vector(L_compiled)
A_scipy = A.to_scipy()
bcs = [bc_inlet_x, bc_inlet_y, bc_wall]
dolfinx.fem.assemble_matrix(A, a_compiled, bcs=bcs)
dolfinx.fem.assemble_vector(b.array, L_compiled)
dolfinx.fem.apply_lifting(b.array, [a_compiled], [bcs])
[bc.set(b.array) for bc in bcs]

import scipy.sparse
A_inv = scipy.sparse.linalg.splu(A_scipy)
wh = dolfinx.fem.Function(W)
wh.x.array[:] = A_inv.solve(b.array)
visualize_mixed(wh, scale=0.1)

A matrix block system

We notice that we can write this system as a block matrix system
\[\begin{split} \begin{bmatrix} A & B \\ B^T & 0 \end{bmatrix} \begin{bmatrix} \mathbf{u} \\ \hat p \end{bmatrix} = \begin{bmatrix} L_u \\ 0 \end{bmatrix} \end{split}\]
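This saddle-point structure can be illustrated at the pure linear-algebra level, independently of the finite element assembly above. The following sketch builds a toy system with scipy.sparse.bmat, using randomly generated blocks (the sizes and matrices are arbitrary, not taken from the Stokes discretization):

```python
import numpy as np
import scipy.sparse
import scipy.sparse.linalg

rng = np.random.default_rng(0)
n_u, n_p = 8, 3  # toy "velocity" and "pressure" dof counts

# A symmetric positive definite block A and a (generically full-rank) coupling B.
R = rng.standard_normal((n_u, n_u))
A = scipy.sparse.csr_matrix(R @ R.T + n_u * np.eye(n_u))
B = scipy.sparse.csr_matrix(rng.standard_normal((n_u, n_p)))

# Assemble the saddle-point system [[A, B], [B^T, 0]]; None denotes a zero block.
K = scipy.sparse.bmat([[A, B], [B.T, None]], format="csc")

rhs = np.concatenate([rng.standard_normal(n_u), np.zeros(n_p)])
w = scipy.sparse.linalg.splu(K).solve(rhs)
u, p = w[:n_u], w[n_u:]

# The last n_p rows enforce the constraint B^T u = 0,
# analogous to the incompressibility condition.
print(np.allclose(B.T @ u, 0.0))  # True
```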
Blockchain Is Not Under Threat from Quantum Computing

In the cryptocurrency community, there is a lurking fear about quantum computing due to its potential ability to break cryptocurrencies and the encryption that secures them. Many actors in the crypto space are apprehensive about losing the security of their private keys as more headlines emerge touting imminent "quantum supremacy."

New advancements in quantum technology and algorithms could subvert established digital security features using two key types of attack: the storage attack and the transit attack. The former involves a malicious actor with quantum capabilities targeting vulnerable addresses to steal funds. The latter involves a malicious actor with large-scale quantum capabilities attempting to hijack a blockchain transaction in transit and transfer the funds to their address instead. However, the blockchain is not under any significant threat from quantum computing for the following reasons:

Current Cryptography Techniques

Whereas it is easy to multiply small numbers to generate giant ones, going in the opposite direction is significantly harder; you cannot simply look at a number and read off its factors. This principle is the basis of one of the most popular forms of data encryption, RSA. RSA encryption can only be broken by factoring the product of two prime numbers, each of which is usually hundreds of digits long. These numbers serve as unique keys to a problem that is effectively unsolvable without already knowing the factors. In 1995, however, mathematician Peter Shor of AT&T Bell Laboratories developed a new algorithm for factoring numbers into primes, whatever their size. IBM built a device for quantum computing in 2001, with seven qubits made from atomic nuclei, to demonstrate Shor's algorithm. The experiment was a success, and the machine ran Shor's algorithm to factor 15 into 5 and 3—hardly an impressive calculation, but an outstanding achievement in simply proving the algorithm works in practice.
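The asymmetry RSA relies on can be shown in a few lines of Python. This is a toy illustration with two small, well-known primes; real RSA moduli are hundreds of digits long, far beyond what naive trial division (or any known classical algorithm) can factor in reasonable time.

```python
# Multiplying two primes is trivial; recovering them from the product
# requires factoring. (These primes are tiny compared to real RSA keys.)
p, q = 104723, 104729          # two small primes
n = p * q                      # the "public modulus": easy to compute

def trial_division_factor(n):
    """Naive factoring by trial division: cost grows exponentially
    with the number of digits, which is why large RSA moduli are safe
    from classical attacks."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

print(trial_division_factor(n))  # (104723, 104729)
```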
Theoretically, a powerful enough quantum computer could implement Shor's algorithm to hack everything from bank records to personal files. Cryptocurrencies like Bitcoin, Ethereum and others are developed using blockchain technology that allows parties to perform peer-to-peer transactions in a system that is not controlled by a centralized authority. Instead, blockchain provides a trust framework that is sustained by cryptographic algorithms. Cryptocurrencies are secured through public key cryptography, which combines a public key that anyone can see with a private key for each user's eyes only. Bitcoin uses the SHA-256 cryptographic hash function, which today's computers cannot break. The crypto community fears that as quantum computers evolve, there is an increased potential for quantum-equipped actors to steal huge quantities of cryptocurrencies by abusing the computational advantage offered by quantum computers.

Classic Computers versus Quantum Computers

Quantum computing is undoubtedly vastly different from classical computing. Indeed, according to physicist Shohini Ghose of Wilfrid Laurier University, the difference between classical computing and quantum computing is akin to the difference between light bulbs and candles; the light bulb is not merely a better candle but something entirely different altogether. Quantum computing is a process that uses the laws of quantum mechanics to solve problems that are too large or complex for classical computers, using multidimensional quantum algorithms.
While classical computers utilize binary bits, the basic unit of information in quantum computing is known as qubits. Binary bits are often silicon-based chips and can only represent two states, 0 or 1. On the other hand, qubits come in different forms depending on the architecture of the particular quantum systems since some require extremely cold temperatures to work properly. They can be made from photons, trapped ions, and artificial or real atoms. Furthermore, qubits use superposition to be in multiple states simultaneously. A qubit can be 0 or 1 and any part of 0 and 1 in a superposition of both states. Unlike classic supercomputers, quantum computers are more elegant, smaller, and require less energy. An IBM quantum processor is a wafer much the same size as the one found in a laptop, whereas its hardware system is approximately the size of a car and consists mostly of cooling systems to maintain the superconducting processor at its ultra-cold operational temperature. To comprehend how quantum computing works, it is necessary to understand entanglement, superposition, and quantum interference. Superposition, Entanglement, and Interference To explain superposition, some use the analogy of Schrodinger’s cat, while others refer to the moments in which a coin is in the air during a coin toss. Simply put, quantum superposition refers to the scenario where quantum particles combine all possible states; they continue to fluctuate and move while the quantum computer measures and observes each one. Significantly, rather than the two-things-at-once point of focus, superposition is the ability to view quantum states in multiple ways and ask them different questions; essentially, unlike a traditional computer that has to perform tasks sequentially, a quantum computer can run an enormous number of parallel operations. Quantum entanglement refers to the state where quantum particles can correspond to measurements. 
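The amplitude picture behind superposition and entanglement can be made concrete with a few lines of numpy. This is a minimal sketch of the standard state-vector formalism, not a simulation of real quantum hardware:

```python
import numpy as np

# A qubit state a|0> + b|1> is a normalized complex 2-vector; measuring it
# yields 0 with probability |a|^2 and 1 with probability |b|^2.
psi = np.array([1, 1j]) / np.sqrt(2)        # an equal superposition
probs = np.abs(psi) ** 2
print(probs)                                 # [0.5 0.5]

# A Hadamard gate puts |0> into superposition: H|0> = (|0> + |1>)/sqrt(2).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(H @ np.array([1, 0]))                  # [0.70710678 0.70710678]

# Two entangled qubits (a Bell state): measuring one determines the other,
# since only the outcomes 00 and 11 have nonzero probability.
bell = np.zeros(4)
bell[0b00] = bell[0b11] = 1 / np.sqrt(2)     # (|00> + |11>)/sqrt(2)
print(bell ** 2)                             # [0.5 0.  0.  0.5]
```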
In this state of entanglement, measurements taken from one qubit can be used to make conclusions about the other units; when particles are entangled, none of them can be described without reference to the others. Entanglement enables quantum computers to calculate bigger stores of data and information, thereby solving larger problems.

Quantum Interference

As qubits undergo superposition, they are also susceptible to quantum interference, which affects the probability of qubits collapsing one way or another. Qubits require a great degree of maintenance, as any number of simple actions or variables risk sending error-prone qubits into decoherence. Merely using a quantum computer to measure qubits or execute operations is enough to crash it. Additionally, even small vibrations or temperature changes will cause decoherence of qubits. For this reason, quantum computers are kept isolated, with the ones that use superconducting circuits being kept at near absolute zero, or -460 degrees Fahrenheit. According to Jonathan Carter, a scientist at Lawrence Berkeley National Laboratory, the two challenges that need to be overcome are imbuing individual qubits with better fidelity and arranging them to form logical qubits. Carter estimates that to form one fault-tolerant qubit, "hundreds to thousands to tens of thousands of physical qubits" will be required, and he posits that none of the technology available at the moment could scale to those levels.

Why Blockchain Is Not at Risk from Quantum Computing

The good news for crypto enthusiasts is that the quantum computing problem can be fixed by implementing post-quantum cryptography technology, which is emerging at the same time as blockchain. The United States National Institute of Standards and Technology (NIST) is, in the same vein, trying to get ahead of the challenge and is currently seeking out quantum-proof cryptography algorithms with the involvement of researchers worldwide.
Quantum computers are also notoriously prone to crashing due to noise and decoherence. They are so delicate that simply operating them, or exposing them to tiny vibrations, can cause errors, so they must run under special conditions such as extremely low temperatures or inside a vacuum chamber. These constraints make it difficult to build a quantum computer stable enough to break blockchain encryption. Additionally, far more computational power would be needed to threaten cryptocurrencies; the capacity required to carry out a storage attack, for instance, is estimated at around 10 million qubits, vastly more than the hundred or so qubits currently available. Even with billions of dollars worth of investment from governments and the world's biggest corporations signalling that the race for quantum capabilities is well underway, serious quantum computers are still a way off.
Truncated SVD for Dimensionality Reduction in Sparse Feature Matrices

Discussing how truncated SVD differs from normal SVD

Sparse feature matrices require special dimensionality reduction techniques such as Truncated Singular Value Decomposition (Truncated SVD), because most of the values in the matrix are zero!

Sparse representation of a matrix

A feature matrix is a matrix containing all input features and is typically represented by the variable X; it is the training dataset that we use for training the model. When most of the values in the feature matrix are zero, it is often stored as a sparse matrix to save memory and computing time. The following matrix has many zero elements. We can convert it to a sparse matrix using the following code.

from scipy.sparse import csr_matrix
X_sparse = csr_matrix(X)  # where X refers to the above matrix (a numpy array)

We get the following output. In the sparse representation, only non-zero elements are stored, in the format (row, column) value. For example, (1, 0) 1 denotes that the value 1 is in the second row and first column of the matrix (indices begin with zero).
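To make the "truncated" part concrete, here is a minimal sketch of the idea using only NumPy: compute the full SVD, keep just the top-k singular values and vectors, and project the samples into a k-dimensional space. (In practice you would typically use scikit-learn's `TruncatedSVD`, which accepts `csr_matrix` input directly without densifying; the matrix below is a small example chosen for illustration.)

```python
import numpy as np

def truncated_svd(X, k):
    """Project X (n_samples x n_features) onto its top-k singular directions."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * S[:k]   # reduced representation, shape (n_samples, k)

# A small feature matrix with many zeros (rank 2 by construction)
X = np.array([[1.0, 0, 0, 2],
              [0,   0, 3, 0],
              [0,   0, 0, 0],
              [2.0, 0, 0, 4]])

X_reduced = truncated_svd(X, k=2)
print(X_reduced.shape)  # (4, 2)
```

Because this example matrix has rank 2, keeping k = 2 components loses no information; for real sparse data you pick k to trade reconstruction error against dimensionality.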
Propagation of Uncertainty

Suppose that you need to add a reagent to a flask by several successive transfers using a class A 10-mL pipet. By calibrating the pipet (see Table 4.8), you know that it delivers a volume of 9.992 mL with a standard deviation of 0.006 mL. Since the pipet is calibrated, we can use the standard deviation as a measure of uncertainty. This uncertainty tells us that when we use the pipet to repetitively deliver 10 mL of solution, the volumes actually delivered are randomly scattered around the mean of 9.992 mL. If the uncertainty in using the pipet once is 9.992 ± 0.006 mL, what is the uncertainty when the pipet is used twice? As a first guess, we might simply add the uncertainties for each delivery;

(9.992 mL + 9.992 mL) ± (0.006 mL + 0.006 mL) = 19.984 ± 0.012 mL

It is easy to see that combining uncertainties in this way overestimates the total uncertainty. Adding the uncertainty for the first delivery to that of the second delivery assumes that both volumes are either greater than 9.992 mL or less than 9.992 mL. At the other extreme, we might assume that the two deliveries will always be on opposite sides of the pipet's mean volume. In this case we subtract the uncertainties for the two deliveries,

(9.992 mL + 9.992 mL) ± (0.006 mL – 0.006 mL) = 19.984 ± 0.000 mL

underestimating the total uncertainty. So what is the total uncertainty when using this pipet to deliver two successive volumes of solution? From the previous discussion we know that the total uncertainty is greater than ±0.000 mL and less than ±0.012 mL. To estimate the cumulative effect of multiple uncertainties, we use a mathematical technique known as the propagation of uncertainty. Our treatment of the propagation of uncertainty is based on a few simple rules that we will not derive. A more thorough treatment can be found elsewhere.
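The rules developed in the next section combine uncertainties for added quantities in quadrature (the square root of the sum of squares). As a quick numeric preview, applying that to the two-delivery example gives a value that falls, as expected, between the two naive extremes:

```python
import math

volume = 9.992   # mL delivered per use (calibrated mean)
s = 0.006        # mL standard deviation per use

total_volume = 2 * volume
# Uncertainties for added quantities combine in quadrature, which
# lands between the naive extremes of ±0.000 mL and ±0.012 mL.
s_total = math.sqrt(s**2 + s**2)
print(f"{total_volume:.3f} ± {s_total:.4f} mL")
```

This prints a total uncertainty of about ±0.0085 mL, i.e. sqrt(2) times the single-delivery uncertainty rather than double it.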
A Few Symbols

Propagation of uncertainty allows us to estimate the uncertainty in a calculated result from the uncertainties of the measurements used to calculate the result. In the equations presented in this section the result is represented by the symbol R and the measurements by the symbols A, B, and C. The corresponding uncertainties are s[R], s[A], s[B], and s[C]. The uncertainties for A, B, and C can be reported in several ways, including calculated standard deviations or estimated ranges, as long as the same form is used for all measurements.

Uncertainty When Adding or Subtracting

When measurements are added or subtracted, the absolute uncertainty in the result is the square root of the sum of the squares of the absolute uncertainties for the individual measurements. Thus, for the equations R = A + B + C or R = A + B – C, or any other combination of adding and subtracting A, B, and C, the absolute uncertainty in R is

s[R] = (s[A]^2 + s[B]^2 + s[C]^2)^(1/2)    (4.6)

Uncertainty When Multiplying or Dividing

When measurements are multiplied or divided, the relative uncertainty in the result is the square root of the sum of the squares of the relative uncertainties for the individual measurements. Thus, for the equations R = A x B x C or R = A x B/C, or any other combination of multiplying and dividing A, B, and C, the relative uncertainty in R is

s[R]/R = ((s[A]/A)^2 + (s[B]/B)^2 + (s[C]/C)^2)^(1/2)    (4.7)

Uncertainty for Mixed Operations

Many chemical calculations involve a combination of adding and subtracting, and multiplying and dividing. As shown in the following example, the propagation of uncertainty is easily calculated by treating each operation separately using equations 4.6 and 4.7 as needed.

Uncertainty for Other Mathematical Functions

Many other mathematical operations are commonly used in analytical chemistry, including powers, roots, and logarithms. Equations for the propagation of uncertainty for some of these functions are shown in Table 4.9.

Is Calculating Uncertainty Actually Useful?
Given the complexity of determining a result's uncertainty when several measurements are involved, it is worth examining some of the reasons why such calculations are useful. A propagation of uncertainty allows us to estimate an expected uncertainty for an analysis. Comparing the expected uncertainty to that which is actually obtained can provide useful information. For example, in determining the mass of a penny, we estimated the uncertainty in measuring mass as ±0.002 g based on the balance's tolerance. If we measure a single penny's mass several times and obtain a standard deviation of ±0.020 g, we would have reason to believe that our measurement process is out of control. We would then try to identify and correct the problem. A propagation of uncertainty also helps in deciding how to improve the uncertainty in an analysis. In Example 4.7, for instance, we calculated the concentration of an analyte, obtaining a value of 126 ppm with an absolute uncertainty of ±2 ppm and a relative uncertainty of 1.6%. How might we improve the analysis so that the absolute uncertainty is only ±1 ppm (a relative uncertainty of 0.8%)? Looking back on the calculation, we find that the relative uncertainty is determined by the relative uncertainty in the measured signal (corrected for the reagent blank) and the relative uncertainty in the method's sensitivity. Of these two terms, the sensitivity's uncertainty dominates the total uncertainty. Measuring the signal more carefully will not improve the overall uncertainty of the analysis. On the other hand, the desired improvement in uncertainty can be achieved if the sensitivity's absolute uncertainty can be decreased to ±0.0015 ppm–1. As a final example, a propagation of uncertainty can be used to decide which of several procedures provides the smallest overall uncertainty. Preparing a solution by diluting a stock solution can be done using several different combinations of volumetric glassware.
For instance, we can dilute a solution by a factor of 10 using a 10-mL pipet and a 100-mL volumetric flask, or by using a 25-mL pipet and a 250-mL volumetric flask. The same dilution also can be accomplished in two steps using a 50-mL pipet and a 100-mL volumetric flask for the first dilution, and a 10-mL pipet and a 50-mL volumetric flask for the second dilution. The overall uncertainty, of course, depends on the uncertainty of the glassware used in the dilutions. As shown in the following example, we can use the tolerance values for volumetric glassware to determine the optimum dilution strategy.
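That comparison is just the multiplication/division rule applied to each piece of glassware in turn. The sketch below computes the relative uncertainty of each dilution path; note that the tolerance figures used here are typical class A values assumed for illustration, so substitute the values from your own tolerance table.

```python
import math

def relative_uncertainty(pairs):
    """pairs: (volume, tolerance) for each piece of glassware used.
    Relative uncertainties combine as the root-sum-of-squares."""
    return math.sqrt(sum((tol / vol) ** 2 for vol, tol in pairs))

# Assumed, representative class A tolerances in mL (check your own table)
one_step_small = relative_uncertainty([(10, 0.02), (100, 0.08)])   # 10-mL pipet + 100-mL flask
one_step_large = relative_uncertainty([(25, 0.03), (250, 0.12)])   # 25-mL pipet + 250-mL flask
two_step       = relative_uncertainty([(50, 0.05), (100, 0.08),    # first dilution
                                       (10, 0.02), (50, 0.05)])    # second dilution

print(f"{100*one_step_small:.2f}%  {100*one_step_large:.2f}%  {100*two_step:.2f}%")
```

With these assumed tolerances, the single-step dilution using the larger glassware gives the smallest relative uncertainty and the two-step route the largest: every extra transfer adds another term under the square root.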
Rankine to Fahrenheit Converter | Precise Temperature Conversion Tool

Rankine to Fahrenheit Conversion

What is Rankine to Fahrenheit Conversion?

Rankine to Fahrenheit conversion is the process of transforming a temperature value from the Rankine scale to the Fahrenheit scale. The Rankine scale, like the Kelvin scale, is an absolute temperature scale, meaning it starts at absolute zero. However, it uses the Fahrenheit degree for its unit increment. The Fahrenheit scale is commonly used in everyday life in the United States and a few other countries. Understanding this conversion is important for engineers, scientists, and students working with different temperature scales, especially in thermodynamics and heat transfer.

Conversion Formula

The formula to convert Rankine (°R) to Fahrenheit (°F) is:
\[°F = °R - 459.67\]
• °F = Temperature in Fahrenheit
• °R = Temperature in Rankine

Step-by-Step Calculation Example

Let's convert 671.67°R to Fahrenheit:
1. Start with the Rankine temperature: 671.67°R
2. Subtract 459.67 from the Rankine temperature:
\[671.67 - 459.67 = 212\]
Therefore, 671.67°R is equal to 212°F.

Visual Representation

(The original page shows a bar scale running from 0°R (-459.67°F) to 671.67°R (212°F), with a highlighted portion indicating where 212°F (671.67°R) falls on that scale.)

Important Temperature Points

• Absolute zero: 0°R = -459.67°F
• Freezing point of water: 491.67°R = 32°F
• Boiling point of water: 671.67°R = 212°F
• Normal human body temperature: 558.27°R ≈ 98.6°F
• Room temperature: 527.67°R - 531.67°R ≈ 68°F - 72°F

Understanding Rankine to Fahrenheit conversion is essential for anyone working with temperature measurements across different systems, particularly in engineering and scientific fields where absolute temperature scales are often used. While Rankine is less common in everyday life, it's important in certain technical applications.
This converter bridges the gap between the absolute Rankine scale and the more familiar Fahrenheit scale, providing accurate conversions and educational insights into temperature measurement systems.
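The conversion itself is a one-line function. A minimal sketch, checked against the reference points listed above:

```python
def rankine_to_fahrenheit(rankine):
    """°F = °R - 459.67; both scales use the same degree size."""
    return rankine - 459.67

# Check against the reference points listed above
for r in (0.0, 491.67, 671.67):
    print(f"{r}°R = {rankine_to_fahrenheit(r):.2f}°F")
```

Because Rankine and Fahrenheit share a degree size, no scaling factor is needed, only the 459.67 offset to absolute zero.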
Reliability Analysis of Exponentiated Exponential Distribution for Neoteric and Ranked Sampling Designs with Applications

Keywords: Exponentiated Exponential distribution, Neoteric ranked set sampling, Maximum likelihood method, Relative Efficiency

The neoteric ranked set sampling (NRSS) scheme is an effective design compared to the usual ranked set sampling (RSS) scheme. Herein, we consider reliability estimation of the stress-strength (SS) model using the maximum likelihood procedure via NRSS and RSS designs. Assume that stress Y and strength X are exponentiated exponential random variables with the same scale parameter. Various sample strategies are used to evaluate the reliability estimator. We acquire an estimate of R when the samples of stress and strength random variables are chosen from the same sampling methods, such as RSS or NRSS. Furthermore, we derive the R estimator when X and Y are chosen from RSS and NRSS, respectively, and vice versa. A simulation investigation is performed to assess and compare the accuracy of estimates for all proposed schemes. We conclude, based on the study outcomes, that the reliability estimates of the stress-strength model via NRSS are more efficient than those via RSS. Analysis of real data is displayed to investigate the usefulness of the proposed estimators.

How to Cite

Hassan, A., Elshaarawy, R., & Nagy, H. (2023). Reliability Analysis of Exponentiated Exponential Distribution for Neoteric and Ranked Sampling Designs with Applications. Statistics, Optimization & Information Computing, 11(3), 580-594. https://doi.org/10.19139/soic-2310-5070-1317
Full Tilt: Universal Constructors for General Shapes with Uniform External Forces

Title: Full Tilt: Universal Constructors for General Shapes with Uniform External Forces
Authors: Jose Balanza-Martinez, David Caballero, Angel A. Cantu, Luis Angel Garcia, Austin Luchsinger, Rene Reyes, Robert Schweller, and Tim Wylie
Conference: The 30th ACM-SIAM Symposium on Discrete Algorithms (SODA'19), 2019.

Abstract: We investigate the problem of assembling general shapes and patterns in a model in which particles move based on uniform external forces until they encounter an obstacle. In this model, corresponding particles may bond when adjacent to one another. Succinctly, this model considers a 2D grid of "open" and "blocked" spaces, along with a set of slidable polyominoes placed at open locations on the board. The board may be tilted in any of the 4 cardinal directions, causing all slidable polyominoes to move maximally in the specified direction until blocked. By successively applying a sequence of such tilts, along with allowing different polyominoes to stick when adjacent, tilt sequences provide a method to reconfigure an initial board configuration so as to assemble a collection of previously separate polyominoes into a larger shape. While previous work within this model of assembly has focused on designing a specific board configuration for the assembly of a specific given shape, we propose the problem of designing \emph{universal configurations} that are capable of constructing a large class of shapes and patterns. For these constructions, we present the notions of \emph{weak} and \emph{strong} universality, which indicate the presence of "excess" polyominoes after the shape is constructed. In particular, for given integers $h,w$, we show that there exists a weakly universal configuration with $\mathcal{O}(hw)$ $1 \times 1$ slidable particles that can be reconfigured to build any $h \times w$ patterned rectangle.
We then expand this result to show that there exists a weakly universal configuration that can build any $h \times w$-bounded size connected shape. Following these results, which require an admittedly relaxed assembly definition, we go on to show the existence of a strongly universal configuration (no excess particles) which can assemble any shape within a previously studied "Drop" class, while using quadratically less space than previous results. Finally, we include a study of the complexity of deciding if a particle within a configuration may be relocated to another position, and deciding if a given configuration may be transformed into a second given configuration. In both cases, we show this problem to be PSPACE-complete, even when movable particles are restricted to $1\times 1$ and $2\times 2$ polyominoes that do not stick to one another.

Accompanying videos related to the paper

Section 3: Pattern and General Shape Builder
Construction of a Pattern:
Construction of a Drop Shape:
Construction of a Non-Drop Shape:

Section 4: Drop Shape Builder

Section 5: Relocation Gadget
Example of Correct Traversal Through our Relocation Gadget: This video shows the correct sequence of tilts to traverse our robot through the gadget. Note that the closed exit/entrance points of our gadgets do not affect this traversal, and are only there to simplify testing.
Example of Incorrect State Traversal Attempt Through our Relocation Gadget: Here we see the robot trying to traverse our gadget while the gadget is not in the correct state to allow the robot to do so. This is only one of the possible sequences, but no sequence of tilts exists that would allow the robot to traverse through the gadget from this position.
Example of Robot Polyomino Becoming Stuck Through Correct State but Incorrect Traversal Sequence: Here is an example of the importance of our optimal sequence for traversing the gadget. Although there are many ways to traverse it, there are also many ways to enter what we call a "stuck" configuration. From these configurations there is no sequence of tilts that would allow the robot to leave the gadget.

Section 6: Reconfiguration Gadget
Example of Correct Traversal Through Our Reconfiguration Gadget:
Example of Moving State Tiles to Reconfiguration Ring: Here we remove all tiles from the inner section of our gadget and move them into the "reconfiguration ring". This property of our gadget is what allows us to achieve a global configuration of our gadgets and classify this problem as a reconfiguration problem.
Example of a System of Reconfiguration Gadgets: This shows a system of reconfiguration gadgets and the robot's traversal through the system. At the end of our traversal we move all our state tiles into the reconfiguration ring and achieve a specific configuration of our entire system.
Example of Traversal Sequences that Preserve Initial State Tile Positions: The videos here show that we can easily traverse gadgets while maintaining a small set of positions for our state tiles. We know that from these positions we can traverse the gadget with a sequence that will allow us to keep the positions of our red tiles within this set, while not moving any tiles that started in this set into the reconfiguration ring.
Macd Signal Histogram One may choose to draw the MACD as a solid histogram, with the Signal as a continuous line overlaying the histogram. The drawing of the Signal or MACD lines/. A signal line is then generated by applying an EMA to the MACD line. And finally a histogram shows the difference between the MACD Line and the Signal line. The. This is a trend-following dynamic indicator that shows the correlation between two moving averages, generally a period and period SMA or WMA or EMA. You. MACD, short for moving average convergence/divergence, is a trading indicator used in technical analysis of securities prices, created by Gerald Appel in. A second line, called the Signal line is plotted as a moving average of the MACD. A third line, called the MACD Histogram is optionally plotted as a The MACD histogram is a chart that is often superimposed on the same axis as the two lines, and shows the MACD minus the value of the MACD signal line. The MACD. The last component is the MACD histogram, which is the difference between the MACD line and the signal line. However, the parameters of the MACD line can also. The histogram is at zero when MACD and the signal line cross (the signal for trading with the MACD). The histogram turns back towards the zero line when MACD. The Moving average Convergence divergence (MACD) is a popular technical indicator used by traders to identify potential trend reversals and generate buy or sell. The MACD Histogram (MACD-H) consists of vertical bars showing the difference between the MACD line and its signal line. • A change in the MACD-H will usually. The Histogram: A series of vertical lines representing the distance between the 'Signal line' and the 'MACD line'. Green bars above the 'Zero line' indicate. The histogram is the difference between the two MACD lines. Where the histogram crosses the zero line is the point where the two MACD lines are crossing (the. 
Overview · The MACD Histogram (MACD-H) consists of vertical bars showing the difference between the MACD line and its signal line · A change in the MACD-H will. The MACD Histogram (Bars): a graphical representation of the divergence and convergence of the MACD line and the signal line. In other words, the MACD histogram. Histograms: Histogram is an integral part of the MACD indicator and provides additional information about the momentum and strength of a trend. MACD Histogram: The MACD Histogram is the visual representation of the difference between the MACD Line and the Signal Line. It helps traders identify the. The MACD's histogram is a graphical representation of the distance between the MACD line and its signal line. It is used to identify periods of bullish or. The MACD (Moving Average Convergence/ Divergence) Histogram is a study that visualizes the difference between the main MACD plot and its signal line. Plotted as. The MACD histogram is where you just take the MACD line minus the signal line. Generally, the MACD histogram is a trend following indicator and a momentum. MACD's data can be represented with a histogram, which maps the distance between the MACD and its signal line. The histogram will be above the MACD's baseline. The histogram or “bar chart” included in the background of the MACD (see images below) displays the difference between the MACD and signal line. When the MACD. MACD Histogram is an indicator measuring the difference between MACD and the Signal Line (MA applied to MACD). MACD Histogram is used to track changes in the. The histogram in MACD is the difference between the MACD line and the signal line, plotted as a bar chart that oscillates above and below a zero line. Developed by Thomas Aspray in , the MACD-Histogram measures the distance between MACD and its signal line, is an oscillator that fluctuates above and below. The MACD Histogram. As time advances, the difference between the MACD Line and Signal Line will continually differ. 
The MACD histogram takes that difference and plots it as a series of bars. MACD Histogram: This histogram represents the difference between the MACD and signal lines. Subtract the signal line from the MACD line to create the MACD histogram. Traders use the MACD's histogram to identify when bullish or bearish momentum is high and possibly for overbought/oversold signals. The MACD histogram indicator, short for Moving Average Convergence Divergence histogram, is a popular tool used by traders to identify potential buy and sell signals. MACD Histogram is the difference of the MACD and the Signal line. The value of the difference is illustrated in histogram form; the midline marks the points where the MACD and signal lines are equal. The MACD lines fluctuate, converge, diverge, and cross over each other above and below the zero line of the MACD histogram, producing an oscillator that reflects momentum.
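The layered definitions above (MACD line, signal line, histogram) can be sketched in a few lines. This minimal illustration uses the conventional 12/26/9 EMA periods and seeds each EMA with its first input value; real charting packages may seed and smooth their EMAs differently, so treat this as a sketch rather than a drop-in replacement for a charting library.

```python
def ema(values, period):
    k = 2 / (period + 1)              # standard EMA smoothing factor
    out = [values[0]]                 # seed with the first value (an assumption)
    for v in values[1:]:
        out.append(v * k + out[-1] * (1 - k))
    return out

def macd_histogram(prices, fast=12, slow=26, signal=9):
    # MACD line = fast EMA - slow EMA
    macd_line = [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]
    # Signal line = EMA of the MACD line
    signal_line = ema(macd_line, signal)
    # Histogram = MACD line - signal line
    return [m - s for m, s in zip(macd_line, signal_line)]

# A flat price series produces a flat (zero) histogram;
# a steadily rising series pushes the histogram above zero.
print(macd_histogram([100.0] * 40)[-1])
```

The histogram crosses zero exactly where the MACD and signal lines cross, which is why traders watch its sign changes as crossover signals.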
Basic Java algorithms

In order to deepen my understanding of the basics of Java and improve my mastery of it, I started down the road of hand-rolling algorithms. While exercising my coding ability, I am also sharing my practice problems here. If you spot problems or better solutions, please comment and leave a message. Come on! ( ٩( 눈౪눈) و)

1. Yang Hui triangle

The defining feature of the Yang Hui (Pascal's) triangle is that each number is the sum of the two numbers on its "shoulders". Here I print a right-angled Yang Hui triangle, using a two-dimensional array to store the values.

1.1 core code

import java.util.Scanner;

/* Printing the Yang Hui triangle - two-dimensional array method */
public class Demo01_YangHui {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);     // receives keyboard input
        System.out.print("Please enter the number of layers:");
        int layer = sc.nextInt();                // number of layers

        long[][] numArr = new long[layer][];     // allocate the row references
        for (int i = 0; i < numArr.length; i++) {
            numArr[i] = new long[i + 1];         // jagged rows give the triangle shape
        }
        for (int i = 0; i < layer; i++) {
            for (int j = 0; j <= i; j++) {
                // 1 on the edges, otherwise the sum of the two values on the "shoulders"
                numArr[i][j] = (j == 0 || j == i) ? 1 : numArr[i - 1][j - 1] + numArr[i - 1][j];
                System.out.print(numArr[i][j] + "\t");
            }
            System.out.println();
        }
    }
}

1.2 operation results

2. The rabbits give birth from the third month

The problem: a pair of rabbits gives birth to a new pair every month from the third month after its birth, and every new pair likewise starts breeding from its own third month. If no rabbits die, how many pairs are there after a given number of months?

Problem-solving idea: define three variables recording the number of pairs that are one, two, and three-or-more months old, and finally output their sum. Each month the three variables change as follows:
1. Three-month group: add the two-month count to its own total (those pairs have matured).
2. Two-month group: overwrite with the one-month count.
3. One-month group: overwrite with the updated three-month count, since every mature pair produces one newborn pair.

2.1 core code

import java.util.Scanner;

public class Demo01_Rabbit {
    private int numberOfFirst;   // pairs one month old
    private int numberOfSecond;  // pairs two months old
    private int numberOfThird;   // pairs three or more months old
    private int sumOfRabbit;
    private int month;

    /* No-argument constructor: initialization */
    public Demo01_Rabbit() {
        this.sumOfRabbit = 2;
        this.numberOfFirst = 1;
        this.numberOfSecond = 0;
        this.numberOfThird = 0;
        this.month = 0;
    }

    /* Constructor taking the month */
    public Demo01_Rabbit(int month) {
        this.month = month;
    }

    /* Main function */
    public static void main(String[] args) {
        System.out.println(new Demo01_Rabbit().countRabbit()); // print the total number of pairs
    }

    /* Read the month from the keyboard */
    public int inputMonth() {
        System.out.print("Please input month: ");
        Scanner sc = new Scanner(System.in);
        int month = sc.nextInt();
        while (month < 0) {
            try {
                // manually throw the illegal-month exception (a custom class defined separately)
                throw new Demo01_MonthException(month);
            } catch (Demo01_MonthException e) {
                System.out.println(e.toString() + "\n");
                System.out.print("Please input again: "); // re-enter the month
                month = sc.nextInt();
            }
        }
        sc.close(); // close the Scanner
        return month;
    }

    /* Count the number of rabbit pairs */
    public int countRabbit() {
        int month = inputMonth();
        for (int i = 1; i < month; i++) {
            System.out.println("No." + i + " month -> One month: " + numberOfFirst
                    + "\tTwo month: " + numberOfSecond + "\tThree month: " + numberOfThird);
            numberOfThird += numberOfSecond;
            numberOfSecond = numberOfFirst;
            numberOfFirst = numberOfThird;
        }
        System.out.println("No." + month + " month -> One month: " + numberOfFirst
                + "\tTwo month: " + numberOfSecond + "\tThree month: " + numberOfThird);
        sumOfRabbit = numberOfFirst + numberOfSecond + numberOfThird; // total number of pairs
        return sumOfRabbit;
    }
}

2.2 operation results

3. Judging primes

Problem: find how many primes there are between 101 and 200 and output them all.

Solution: by definition, a prime is a number divisible only by 1 and itself. Conversely, if a number is divisible by anything between 2 and its square root, it is not prime; otherwise we print it.

3.1 core code

public class Prime {
    public static void main(String[] args) {
        for (int i = 101; i < 201; i++) {
            boolean isPrime = true;                 // treat every number as prime by default
            // test divisors from 2 up to sqrt(i); note <= so that squares such as 121 are caught
            for (int j = 2; j <= Math.sqrt(i); j++) {
                if (i % j == 0) {                   // non-prime case
                    isPrime = false;
                    break;
                }
            }
            if (isPrime) {                          // prime case
                System.out.print(i + "\t");
            }
        }
    }
}

3.2 operation results

4. Calculating days

Problem: read a year, month and day from the keyboard and work out which day of the year it is.

Solution: the difficulty is that the months have different lengths and common years differ from leap years. We therefore use two one-dimensional arrays recording the number of days in each month for common and leap years respectively, then index by the month and accumulate.

4.1 core code

import java.util.Scanner;

public class Demo01_CalculationDate {
    private int count; // total days
    private int year;
    private int month;
    private int day;
    private final int[] normalYear = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31}; // common year
    private final int[] leapYear   = {31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31}; // leap year

    public Demo01_CalculationDate() {
        this.count = 0;
    }

    public static void main(String[] args) {
        Demo01_CalculationDate days = new Demo01_CalculationDate();
        System.out.println("There are " + days.dayCount() + " days in " + days.date());
    }

    /* Read the date and count the elapsed days of the year */
    public int dayCount() {
        Scanner sc = new Scanner(System.in);
        System.out.print("Please input the year: ");
        this.year = sc.nextInt();
        System.out.print("Please input the month: ");
        this.month = sc.nextInt();
        System.out.print("Please input the day: ");
        this.day = sc.nextInt();

        // accumulate the days of the completed months, leap-year aware
        int[] daysPerMonth = isLeap(this.year) ? this.leapYear : this.normalYear;
        for (int i = 1; i < this.month; i++) {
            this.count += daysPerMonth[i - 1];
        }
        return this.count + this.day;
    }

    /* A year is leap if divisible by 400, or by 4 but not by 100 */
    public boolean isLeap(int year) {
        return year % 400 == 0 || (year % 4 == 0 && year % 100 != 0);
    }

    /* Return the date as a string */
    public String date() {
        return year + "." + month + "." + day;
    }
}

4.2 operation results

5. Lottery game

Problem: suppose you are developing a lottery game. The program randomly generates a two-digit number, prompts the user for a two-digit guess, and decides the prize by these rules:
1. If the user's digits match the drawn number in the actual order, the bonus is $10,000.
2. If both digits match but in the opposite order, the bonus is $3,000.
3. If exactly one digit matches the drawn number in its position, the bonus is $1,000.
4. If one digit matches out of position, the bonus is $500.
5. If no digit matches, the ticket is void.

Problem-solving idea: clearly a selection structure is needed; the only difficulty is generating the random two-digit number. In Java, Math.random() in the Math class returns a double in the range [0.0, 1.0), so the formula (int)(Math.random() * (b - a + 1)) + a yields an integer in the target range [a, b].

5.1 core code

import java.util.Scanner;

public class Demo02_Lottery {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.print("Please input the number you want: ");
        int playerNum = sc.nextInt();                   // player's lottery number
        int randNum = (int) (Math.random() * 90) + 10;  // randomly generated number in [10, 99]
        System.out.println("The lucky number of this issue is " + randNum);

        if (playerNum == randNum) {
            System.out.println("Congratulations on winning $10000");  // first prize: exact match
        } else if (playerNum % 10 == randNum / 10 && playerNum / 10 == randNum % 10) {
            System.out.println("Congratulations on winning $3000");   // second prize: digits reversed
        } else if (playerNum % 10 == randNum % 10 || playerNum / 10 == randNum / 10) {
            System.out.println("Congratulations on winning $1000");   // third prize: one digit in place
        } else if (playerNum % 10 == randNum / 10 || playerNum / 10 == randNum % 10) {
            System.out.println("Congratulations on winning $500");    // consolation: one digit out of place
        } else {
            System.out.println("Your number is wrong, thanks for participating");
        }
    }
}

5.2 operation results

6. Randomly generating non-repeating integers

Problem: create an int array of length 6 whose values lie in 1-30 and are all different.

Solution: generate each value with the formula above, and on every draw check it against the values already stored via an inner loop.

6.1 core code

public class Demo01_Random {
    public static void main(String[] args) {
        int[] arr = new int[6];
        label:
        for (int i = 0; i < arr.length; i++) {
            arr[i] = (int) (Math.random() * 30) + 1;  // random value in [1, 30]
            for (int j = 0; j < i; j++) {
                if (arr[i] == arr[j]) {
                    i--;                // step back so this slot is regenerated,
                    continue label;     // without printing the duplicate value
                }
            }
            System.out.print(arr[i] + "\t");
        }
    }
}

6.2 operation results

7. Clockwise spiral square matrix

Problem: read an integer n (1-20) from the keyboard, then fill an n x n matrix with the numbers 1, 2, 3, ..., n*n in a clockwise spiral. If you enter 3, the following matrix is printed:

1	2	3
8	9	4
7	6	5

Problem-solving idea: assign values clockwise and turn whenever a boundary or an already-assigned row or column is hit. In my code an outer loop wraps three inner while-loops at the same level; by controlling i and j the assignment proceeds in the four directions in turn until the value reaches maxValue.

- Once this idea is clear, the code comes out quickly. When it gets confusing, take out a piece of paper (or open a drawing tool) and sketch the direction of each assignment.

7.1 core code

import java.util.Scanner;

public class Demo02_Spiral {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.print("Please enter an integer:");
        int size = sc.nextInt();
        int row = 0;                  // number of top rows already filled
        int value = 0;                // value being assigned
        int maxValue = size * size;   // used to detect completion
        int[][] arr = new int[size][size];

        for (int i = 0, j = 0; i < size; i++) {
            if (value == maxValue) {  // end flag: all cells assigned
                break;
            }
            arr[j][i] = ++value;                          // fill the current top row rightwards
            if (i == size - 1) {
                j = ++row;
                while (j < size && value != maxValue) {   // fill the right column downwards
                    arr[j++][i] = ++value;
                }
                size--;                                   // pull the right boundary in
                j--;                                      // undo the extra j++
                while (--i != row - 2 && value != maxValue) {  // fill the bottom row leftwards
                    arr[j][i] = ++value;
                }
                i++;                                      // undo the extra --i
                while (--j != row - 1 && value != maxValue) {  // fill the left column upwards
                    arr[j][i] = ++value;
                }
                j++;                                      // undo the extra --j
            }
        }

        // traversal output
        for (int i = 0; i < arr.length; i++) {
            for (int j = 0; j < arr[i].length; j++) {
                System.out.print(arr[i][j] + "\t");
            }
            System.out.println();
        }
    }
}

7.2 operation results

8. The hardest problem ever

You are a sub-captain of Caesar's army. It is your job to decipher the messages sent by Caesar and provide them to your general. The code is simple: for each letter in a plaintext message, you shift it five places to the right to create the secure message (i.e., if the letter is 'A', the ciphertext would be 'F'). Since you are recreating plaintext out of Caesar's messages, you do the opposite:

Cipher text: A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Plain text:  V W X Y Z A B C D E F G H I J K L M N O P Q R S T U

Only letters are shifted in this cipher. Any non-alphabetic character remains the same, and all alphabetic characters are upper case.

Input consists of a (non-empty) series of up to 100 data sets, each formatted as described below, with no blank lines separating data sets; all characters are uppercase. A single data set has three components:
- Start line - a single line, "START"
- Cipher message - a single line of from one to two hundred characters, inclusive, comprising a single message from Caesar
- End line - a single line, "END"
Following the final data set is a single line, "ENDOFINPUT".

--- A less rigorous dividing line ---

This is problem 1048 on the Hangzhou Dianzi University online judge. For those not comfortable with English, a brief summary:
1. Core task: the judge supplies no more than 100 ciphertext data sets (here we enter them manually). Each ciphertext is a non-empty string of at most 200 characters, with all letters uppercase. We shift each letter back by 5 positions (an input of 'F' is converted to 'A'; an input of 'A' wraps around to 'V'); all other characters pass through unchanged.
2. Format control: entering ENDOFINPUT ends the program; START marks the beginning of a ciphertext and END marks the end of that ciphertext.

Problem-solving idea: I defined a String variable; since String is immutable, a character array is also needed to hold the converted string, which is then scanned for letters in a loop. The 26 letters form one cycle, but the ASCII codes continue past 'Z', so A-E must be increased by 21 while F-Z are decreased by 5.

8.1 core code

import java.util.Scanner;

public class Demo01_TheHardestProblemEver {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        while (true) {
            String string = sc.nextLine();
            if (string.equals("ENDOFINPUT")) {        // program end flag
                break;
            } else if (string.equals("START") || string.equals("END")) {
                continue;                             // ciphertext start and end markers
            }
            char[] text = string.toCharArray();       // convert the string to a character array
            // ciphertext decryption core module
            for (int i = 0; i < text.length; i++) {
                if (text[i] >= 'A' && text[i] <= 'E') {
                    System.out.print(text[i] += 21);  // wrap around: A..E -> V..Z
                } else if (text[i] >= 'F' && text[i] <= 'Z') {
                    System.out.print(text[i] -= 5);   // F..Z -> A..U
                } else {
                    System.out.print(text[i]);        // non-letters unchanged
                }
            }
            System.out.println();
        }
    }
}

8.2 operation results

9. Getting the mailbox name

Problem: read an e-mail address and extract the mailbox name. On success, print the name; otherwise report a mailbox format error. For example, given the input Email@168.com, the program extracts the mailbox name "Email" and prints it.

Solution: this exercise uses the indexOf(String str) and substring(int startIndex, int endIndex) methods:
- indexOf(String str): returns the index of str within the string, or -1 if the search fails;
- substring(int startIndex, int endIndex): returns the substring from startIndex (inclusive) to endIndex (exclusive).

9.1 core code

import java.util.Scanner;

public class Demo02_GetEmailName {
    public static void main(String[] args) {
        System.out.print("Please enter email address:");
        String email = new Scanner(System.in).nextLine();
        System.out.println(new Demo02_GetEmailName().getName(email));
    }

    /**
     * @Description Get the mailbox name; the return value is the mailbox name as a String
     * @Date 19:35 2021/9/2
     * @Param [email]
     * @return java.lang.String
     */
    public String getName(String email) {
        int index = email.indexOf("@");
        if (index == -1) {
            return "Email format error!";
        }
        return "Name: " + email.substring(0, index);
    }
}

9.2 operation results

10. Absolute-value sorting

Problem (also from the Hangzhou Dianzi online judge): read n (n <= 100) integers and sort them by absolute value from large to small. The input contains several groups of data, one per line; the first number of each line is n, followed by n integers. n = 0 marks the end of the input and is not processed. For each test instance, output the sorted result on one line, with the numbers separated by single spaces.

Solution: a straightforward exercise; it mainly uses the abs() method of the Math class library, followed by a sort of the array.

10.1 core code

import java.util.Scanner;
import static java.lang.Math.abs;

public class Demo03_AbsoluteValueSort {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        while (true) {
            int n = sc.nextInt();
            if (n == 0) {               // entering 0 terminates the program
                break;
            }
            int[] numArr = new int[n];
            for (int i = 0; i < numArr.length; i++) {  // input data
                numArr[i] = sc.nextInt();
            }
            new Demo03_AbsoluteValueSort().absSort(numArr);  // sort the array by absolute value
            for (int i = 0; i < numArr.length; i++) {
                System.out.print(numArr[i] + (i < numArr.length - 1 ? " " : "\n"));
            }
        }
    }

    /**
     * Bubble sort on absolute values, largest first
     */
    public void absSort(int[] array) {
        for (int i = 1; i < array.length; i++) {
            for (int j = 0; j < array.length - i; j++) {
                if (abs(array[j]) < abs(array[j + 1])) {
                    int temp = array[j];
                    array[j] = array[j + 1];
                    array[j + 1] = temp;
                }
            }
        }
    }
}

10.2 operation results
simultaneous linear equations + matlab code

xmimvili - Posted: Monday 25th of Dec 09:03
Hi there! I have almost taken the decision to look for an algebra tutor, because I've been having a lot of stress due to math homework this year. Every day when I come home from school I spend hours and hours on my algebra homework, and after all the time spent I still seem to be getting incorrect answers. However, I'm also not certain whether a private algebra teacher is worth it, since it's really expensive, and who knows, maybe it's not even that good. Does anyone know anything about simultaneous linear equations + matlab code that can help me? Or maybe some explanations about function domain, solving a triangle or subtracting exponents? Any ideas will be much appreciated.

AllejHat - Posted: Tuesday 26th of Dec 07:32
Hello friend, simultaneous linear equations + matlab code can be really difficult if your basics are not clear. I know this program, Algebra Master, which has helped a lot of novices clear their concepts. I have used this software a couple of times when I was in college and I recommend it to every beginner.

erx - Posted: Wednesday 27th of Dec 18:36
Hi, even I made use of Algebra Master to learn more about simultaneous linear equations + matlab code. It was a remarkable tool that aided me with all the basic principles. I would recommend you to use it before resorting to assistance from coaches, which is often very pricey.

Vament (From: PL/DE) - Posted: Thursday 28th of Dec 16:19
Hi all, thank you very much for all your guidance. I shall surely give Algebra Master at https://algebra-test.com/guarantee.html a try and will keep you posted on my experience. The only thing I am very specific about is that the tool should offer the required assistance on Algebra 2, which in turn would help me complete my homework before the deadline.

MoonBuggy - Posted: Saturday 30th of Dec 07:15
I am a regular user of Algebra Master. It not only helps me finish my assignments faster, the detailed explanations provided make understanding the concepts easier. I strongly suggest using it to help improve problem-solving skills.

Mov (Leeds, UK) - Posted: Saturday 30th of Dec 09:23
I'm glad you're really interested to use this program. It's the best software I ever used and I really don't want you to miss out on it. Try visiting https://algebra-test.com/company.html. Good luck with your test, my friend!
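None of the replies actually show how to solve simultaneous linear equations in code, which is what the thread title asks about. A minimal sketch in Java for the 2x2 case using Cramer's rule (in MATLAB itself, the backslash operator `A\b` solves the general system directly):

```java
public class LinearSolver {
    // Solve the 2x2 system  a*x + b*y = e,  c*x + d*y = f  by Cramer's rule.
    // Returns {x, y}; throws if the system is singular (determinant == 0).
    static double[] solve2x2(double a, double b, double c, double d, double e, double f) {
        double det = a * d - b * c;
        if (det == 0) {
            throw new ArithmeticException("singular system");
        }
        return new double[] { (e * d - b * f) / det, (a * f - e * c) / det };
    }

    public static void main(String[] args) {
        // x + y = 3,  x - y = 1
        double[] xy = solve2x2(1, 1, 1, -1, 3, 1);
        System.out.println("x = " + xy[0] + ", y = " + xy[1]);
    }
}
```

For larger systems you would use Gaussian elimination or a library routine rather than Cramer's rule.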
Left Brain Questions - Harmony Consulting, LLC

What is "Data Sanity?"

Data Sanity means:
• Seeing statistics as the art and science of collecting and analyzing data simply and efficiently in the context of process-oriented thinking
• Using data as a basis for meaningful actions rather than collecting it for "museum purposes"
• Stopping wasteful meetings where data tables, variance reports, bar graphs, trend lines, and traffic-light data displays are routinely used to make important organizational decisions

Armies of employees do nothing but collect, summarize, and report data. Armies of managers and technical professionals spend time reviewing these data and attempting to pull something meaningful from the mass of charts they receive each week. Managers demand accountability for achieving goals and ask why numbers have gotten larger or smaller. Sound familiar?

How can data be a source of waste?

Research and experience of Marc Graham Brown have shown four potential immediate benefits from taking an organizational commitment to "data sanity" seriously:
1. A more than 50 percent reduction in the amount of time spent in monthly senior management meetings;
2. The elimination of up to an hour each day spent by managers reviewing and attempting to interpret unimportant performance data;
3. An 80 percent reduction in the volume of reports generated on a monthly basis by a corporate finance function; and
4. A 60 percent reduction in the pounds of reports printed each day reporting performance data.

Coming up with a good solid set of metrics, and actually using it to manage, will save thousands of hours of time wasted reviewing charts and graphs in meetings and reading reports on statistics that do not really matter. If you figure two hours per week per technical or managerial employee, times 48 weeks, times the labor rate per hour, you're talking a lot of money.

What are the basics of collecting data?

Three Data Themes
1. Collect meaningful data (a basis for ACTION)
2. Collect data over time
3. Use data to identify root causes of problems

The Eight Questions of Planning for Data Collection
1. Why collect the data? [Objective]
2. What methods will be used for the analysis? [Note: decide BEFORE one piece of data is collected]
3. What data will be collected? [What process characteristic is important?]
4. How will the data be measured? [Do people agree on how to obtain the number?]
5. When will the data be collected? [How often will it be collected?]
[Questions 6-8 deal with the logistics of actually collecting the data]
6. Where will the data be collected?
7. Who will collect the data?
8. What training is needed for the data collectors?

How does "statistical thinking" differ from the "Statistics from Hell 101" I was taught in the past?

There are three kinds of statistics. Using the contexts of manufacturing / health care / education:
Descriptive statistics: "What can I say about this individual widget / patient / student?"
Enumerative statistics: "What can I say about this specific group of widgets / patients / students?"
Analytic statistics: "What can I say about the process that produced both this group (sample) of widgets / patients / students and its results?"

Most basic required university courses teach statistics from an enumerative perspective, with no concept of data either as a process or as being produced by a process. A vital component of analytic statistics is assessing the process that produced the data, which is not generally taught. Unstable processes (the rule in everyday work) actually invalidate the application of many traditional techniques. The good news is that an analytic perspective emphasizes simply "plotting the dots" (in their naturally occurring time order), leading to improved, deeper conversations about data. This is demonstrated vividly in the next example.

Why is plotting the dots so powerful?

The following data table is handed out at a typical six-month "How're we doin'?" meeting comparing three hospitals' average length of stay (LOS). It summarizes the last 30 individual weeks of performance:

Variable   Mean    Median   TrMean   StDev   SE Mean   Min   Max   Q1      Q3
LOS_1      3.027   2.90     3.046    0.978   0.178     1.0   4.8   2.300   3.825
LOS_2      3.073   3.10     3.069    0.668   0.122     1.9   4.3   2.575   3.500
LOS_3      3.127   3.25     3.169    0.817   0.149     1.1   4.5   2.575   3.750

What questions should you now ask?

What luck! You are "blessed": a Six Sigma Black Belt is in your midst, loads the actual data onto the computer, and concludes: "All three hospitals' data pass the test for normality. Of course, we have to be cautious. Just because the data pass the test for normality doesn't necessarily mean that the data are normally distributed... only that, under the null hypothesis, the data cannot be proven to be non-normal."

Got that?

"Since the data can be assumed normally distributed, one can proceed with the analysis of variance (ANOVA) to generate the 95% confidence intervals."

One-Way Analysis of Variance

"The p-value > 0.05: therefore, there are no statistically significant differences, as further confirmed by the overlapping 95% confidence intervals."

What questions should you now ask?

WAIT A MINUTE!
• How were these data collected?
• Was all this analysis even appropriate?
• What about the process(es) that produced these data?

Since these data are in time order by week, what would a simple plot of the numbers in their naturally occurring time order look like? What questions would you ask now? Which analysis would lead to the more meaningful conversation? [Did you know that all of the previous "statistical analysis" was worthless?]
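The contrast is easy to reproduce: a table of summary statistics cannot show a time-ordered shift, but even a crude text plot can. A minimal console sketch (the weekly values below are made up for illustration and are not the hospital data above):

```java
public class RunChart {
    // Map a value into a column index on a fixed-width axis.
    static int position(double v, double min, double max, int width) {
        return (int) Math.round((v - min) / (max - min) * (width - 1));
    }

    // Print a crude text run chart: one row per week, a '*' positioned by value.
    static void plot(double[] series, double min, double max, int width) {
        for (int i = 0; i < series.length; i++) {
            int pos = position(series[i], min, max, width);
            StringBuilder row = new StringBuilder();
            for (int c = 0; c < width; c++) {
                row.append(c == pos ? '*' : ' ');
            }
            System.out.printf("wk %2d |%s| %.1f%n", i + 1, row, series[i]);
        }
    }

    public static void main(String[] args) {
        // Hypothetical weekly average length-of-stay values: the overall mean and
        // standard deviation look unremarkable, yet the series contains an obvious
        // shift that summary statistics would hide.
        double[] los = {4.1, 4.3, 3.9, 4.2, 4.0, 4.4, 2.0, 1.9, 2.1, 1.8, 2.2, 2.0};
        plot(los, 1.0, 5.0, 40);
    }
}
```

The shift halfway through the series is obvious at a glance, though it would vanish into a single mean and standard deviation.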
Materials Sciences and Applications Vol.4 No.1 (2013), Article ID: 27077, 7 pages DOI:10.4236/msa.2013.41008

First Principles Study of Structural and Electronic Properties of O[x]S[1−x]Zn Ternary Alloy

^1Department of Physics, University of Djillali Liabes, Sidi Bel-Abbes, Algeria; ^2Institute of Nano Electronic Engineering, University Malaysia Perlis, Kangar, Malaysia; ^3Modeling and Simulation in Materials Sciences Laboratory, Sidi-Bel-Abbès University, Sidi Bel-Abbes, Algeria.

Email: ^*lttnsameri@yahoo.fr

Received September 6th, 2012; revised October 12th, 2012; accepted November 2nd, 2012

Keywords: FP-LMTO; Ab-Initio; Approach of Zunger; Effective Mass

We perform self-consistent ab-initio calculations to study the structural and electronic properties of zinc-blende ZnS, ZnO and their alloy. The full-potential muffin-tin orbital (FP-LMTO) method was employed within density functional theory (DFT), based on the local density approximation (LDA) and the generalized gradient approximation (GGA). We analyze the composition effect on the lattice constants, bulk modulus, band gap and electron effective mass. Using the approach of Zunger and co-workers, the microscopic origins of the band gap bowing have been detailed and explained. Discussion is given in comparison with other available theoretical and experimental results.

1. Introduction

The II-VI compound semiconductors have recently received considerable interest, and a lot of experimental and theoretical work has been devoted to these alloys. They have attracted much attention because of their numerous applications in optoelectronic devices such as visual displays, high-density optical memories, transparent conductors, solid-state laser devices, photodetectors, solar cells, etc. The organization of this paper is as follows: we explain the FP-LMTO computational method in Section 2. In Section 3, the results and discussion for the structural and electronic properties are presented.
Finally, a conclusion is given in Section 4. 2. Computational Methods The calculations reported in this work were carried out by FP-LMTO [4,5] within the density functional theory DFT based on local density Approximation LDA [6] and generalized gradient approximation GGA [7]. In this method the space is divided into an interstitial region (IR) and non-overlapping muffin-tin (MT) spheres centered at the atomic sites. In the IR region, the basis functions are represented by Fourier series. Inside the muffin-tin spheres, the basis sets is described by radial solutions of the one-particle Schrödinger equation (at fixed energy) and their energy derivatives multiplied by spherical harmonics. The valence wave functions inside the spheres are expanded up to l[max] = 6. The k integration over the Brillouin zone is performed using the tetrahedron method [8]. The values of the sphere radii (MTS) and the number of plane waves (NPLW) used in our calculation are listed in Table 1. 3. Results and Discussion 3.1. Structural Properties The calculations were firstly carried out to determine the structural properties of ZB binary compounds ZnS and ZnO and Table 2, we summarize the calculated modulus and their pressure derivatives) of ZnS, ZnO compounds and Table 1. The plane wave number PW, energy cuttof (in Ry) and the muffin-tin radius (RMT) (in a.u.) used in calculation for binary ZnS and ZnO and their alloy in zinc blende (ZB) structure. ^aRef. [16], ^bRef. [20]. Table 2. Lattice constants a, bulk modulus B, and pressure derivative of bulk modulus B, for ZB ZnS, ZnO and O[x]S[1−x]Zn solid solutions. ^aRef. [11], ^bRef. [12], ^cRef. [13], ^dRef. [14], ^eRef. [15], ^fRef. [16], ^gRef. [17], ^hRef. [18], ^iRef. [19], ^jRef. [20], ^kRef. [21], ^lRef. [22], ^mRef. [23], ^nRef. [24]. where a[AC] and a[BC] are the equilibrium lattice constants of the binary compounds AC and BC Hence, the lattice constant can be written as: where the quadratic term b is the bowing parameter. 
Figures 1 and 2 show the variation of the calculated equilibrium lattice constant and the bulk modulus versus concentration for 3.2. Electronic Properties The calculations of the electronic band structure properties, magnitude of band-gap were carried out for ZnS, ZnO and Table 3 for the high-symmetry Figure 1. Composition dependence of the calculated lattice constants using the GGA (solid squares) and LDA (solid circles) for Figure 2. Composition dependence of the calculated bulk modulus using the LDA (solid squares) and GGA (solid circles) of points Γ and X in the Brillouin zone. All energies are with reference to the top of the valence band at Γ point. The results show that ZnS and ZnO compound is a direct-gap semiconductor with the minimum of conduction band at Γ point.The calculated GGA (LDA) energy gaps of ZnS and ZnO Eg are 1.97(2.12) eV and 0.69(0.79) eV, respectively, which are in good agreement with the theoretical values as listed in Table 3. Focusing now on the electronic properties of the Figure 3 presents the variation of the direct and indirect band gap energies as functions of the composition x for the ternary alloys. We note direct (Γ-Γ) and indirect (Γ-X) band gap not intersect because the computed band structures of the alloys using both LDA and GGA schemes indicate a direct band gap at various Indeed it is a general trend to describe the bandgap of an alloy A[x]B[1−x]C in terms of the pure compound energy gap E[AC] and E[BC] by the sem-empirical formula: where E[AC] and E[BC] corresponds to the gap of the ZnO and ZnS for the O[x]S[1][−][x]Zn alloy. The calculated band gap versus concentration was fitted by a polynomial equation. The results are shown in Figure 3 and are summarized as follows In order to better understand the physical origins of the large and composition-dependent bowing in Figure 3. Direct and indirect band gap energies as a function of O composition using the LDA and GGA of Table 3. 
Direct (Γ–Γ) and indirect (Γ–X) band gaps of ZnS and ZnO and their alloy at equilibrium volume (the energy is given in eV). ^aRef. [25], ^bRef. [14], ^cRef. [15], ^dRef. [26], ^eRef. [16], ^fRef. [13], ^gRef. [18], ^hRef. [27], ^iRef. [19], ^jRef. [28].

The bowing coefficient at a given average composition x measures the change in band gap according to the formal reaction

x AC(a[AC]) + (1 − x) BC(a[BC]) → A[x]B[1−x]C(a[eq]),

where a[AC] and a[BC] are the equilibrium lattice constants of the binary compounds and a[eq] is the equilibrium lattice constant of the alloy with the average composition x. We decompose the reaction into three steps:

x AC(a[AC]) + (1 − x) BC(a[BC]) → x AC(a) + (1 − x) BC(a),

x AC(a) + (1 − x) BC(a) → A[x]B[1−x]C(a),

A[x]B[1−x]C(a) → A[x]B[1−x]C(a[eq]).

The first contribution, the volume deformation b[VD], represents the relative response of the band structure of the binary compounds AC and BC to hydrostatic pressure. The second contribution, the charge-exchange contribution b[CE], reflects a charge-transfer effect that is due to the different (averaged) bonding behavior at the lattice constant a. The final step, measured by b[SR], accounts for changes due to the structural relaxation (SR) in passing from the unrelaxed to the relaxed alloy. Consequently, the total gap bowing parameter is defined as

b = b[VD] + b[CE] + b[SR].

The computed bowing coefficients b, together with the three contributions, for the band gaps as a function of the molar fraction (x = 0.25, 0.5 and 0.75) are shown in Table 4. The calculated band gap bowing parameter exhibits a strong composition dependence in both the GGA and LDA calculations. The variation of the band gap bowing versus concentration is shown in Figure 4. The bowing remains linear and decreases rapidly from x = 0.25 to x = 0.75 in both approaches (GGA and LDA).
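Given the quadratic gap relation E_g(x) = x·E_AC + (1 − x)·E_BC − b·x(1 − x) used in the text, a single alloy composition is enough to extract a bowing value; the sketch below does this at x = 0.5. The binary gaps are the GGA values quoted above for ZnO and ZnS, while the alloy gap at x = 0.5 is a made-up placeholder, not the paper's result:

```python
# Extract the bowing parameter b from the quadratic gap relation
# E(x) = x*E_AC + (1 - x)*E_BC - b*x*(1 - x), evaluated at one composition.
# The alloy gap value below is an illustrative placeholder.

def bowing_from_point(x, E_x, E_AC, E_BC):
    """Solve E(x) = x*E_AC + (1 - x)*E_BC - b*x*(1 - x) for b."""
    linear = x * E_AC + (1.0 - x) * E_BC
    return (linear - E_x) / (x * (1.0 - x))

E_AC, E_BC = 0.69, 1.97   # GGA gaps of ZnO and ZnS quoted in the text (eV)
E_half = 0.58             # placeholder alloy gap at x = 0.5 (eV)
b = bowing_from_point(0.5, E_half, E_AC, E_BC)
print(round(b, 3))        # linear average is 1.33 eV, so b = 4*(1.33 - 0.58) = 3.0
```

In practice one fits b over several compositions (x = 0.25, 0.5, 0.75) rather than a single point, which is why a composition-dependent b can emerge.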
The calculated GGA and LDA gap bowing contribution b[CE] has been found greater than the other contributions.

Table 4. Decomposition of optical bowing into VD, CE and SR contributions compared with other predictions.

Figure 4. Composition dependence of the calculated band gap bowing parameter using LDA (solid squares) and GGA (solid circles).

Table 5. Electron and hole effective masses of ZnS and ZnO and their alloy. ^aRef. [29], ^bRef. [14], ^cRef. [13], ^dRef. [27], ^eRef. [15].

3.3. Effective Masses

It is also interesting to discuss, at the end of the band structure study, the effective masses of electrons and holes, which are important for the excitonic properties of these compounds. We have calculated the effective masses of electrons and holes using both the LDA and GGA schemes; the results are listed in Table 5. A theoretical effective mass in general turns out to be a tensor with nine components. However, for the very idealized simple case where E(k) depends quadratically on k, the effective mass reduces to a scalar determined by the curvature of the band, m* = ħ²(d²E/dk²)⁻¹.

4. Conclusion

In summary, we have studied the structural and electronic properties of ZnS, ZnO and their O[x]S[1−x]Zn alloy.

1. K. Iwata, P. Fons, A. Yamada, H. Shibata, K. Matsubara, K. Nakahara, T. Takasu and S. Niki, “Bandgap Engineering of ZnO Using Se,” Physica Status Solidi (b), Vol. 229, No. 2, 2002, pp. 887-890.

2. P. Hohenberg and W. Kohn, “Inhomogeneous Electron Gas,” Physical Review, Vol. 136, No. 3B, 1964, pp. B864-B871. doi:10.1103/PhysRev.136.B864

3. W. Kohn and L. J. Sham, “Self-Consistent Equations Including Exchange and Correlation Effects,” Physical Review, Vol. 140, No. 4A, 1965, pp. A1133-A1138.

4. S. Savrasov and D. Savrasov, “Full-Potential Linear-Muffin-Tin-Orbital Method for Calculating Total Energies and Forces,” Physical Review B, Vol. 46, No. 19, 1992, pp. 12181-12195.

5. S. Y. Savrasov, “Linear-Response Theory and Lattice Dynamics: A Muffin-Tin-Orbital Approach,” Physical Review B, Vol. 54, No. 23, 1996, pp. 16470-16486. doi:10.1103/PhysRevB.54.16470

6. J. P. Perdew and Y. Wang, “Accurate and Simple Analytic Representation of the Electron-Gas Correlation Energy,” Physical Review B, Vol. 45, No. 13, 1992, pp. 13244-13249.

7. J.
P. Perdew, S. Burke and M. Ernzerhof, “Generalized Gradient Approximation Made Simple,” Physical Review Letters, Vol. 77, No. 18, 1996, pp. 3865-3868. doi:10.1103/PhysRevLett.77.3865 8. P. Blochl, O. Jepsen and O. K. Andersen, “Improved Tetrahedron Method for Brillouin-Zone Integrations,” Physical Review B, Vol. 49, No. 23, 1994, pp. 16223-16233. doi:10.1103/PhysRevB.49.16223 9. F. D. Murnaghan, “The Compressibility of Media under Extreme Pressures,” Proceedings of the National Academy of Sciences USA, Vol. 30, No. 9, 1944, pp. 244-247. doi:10.1073/pnas.30.9.244 10. A. Mokhtari and H. Akbarzadeh, “Electronic and Structural Properties of β-Be3N2,” Physica B: Condensed Matter, Vol. 324, No. 1-4, 2002, pp. 305-311. doi:10.1016/S0921-4526(02)01416-3 11. O. Madelung, Ed., “Londolt-Bornstein,” New Series III, Vol. 22, Springer, Berlin, 1987. 12. W. H. Bragg and J. A. Darbyshire, Joint Management Entrance Test, Vol. 6, 1954, p. 238. 13. Z. Charifi, F. El Haj Hassan, H. Baaziz, S. Khosravizadeh, S. J. Hashemifar and H. Akbarzadeh, “Structural and Electronic Properties of the Wide-Gap Zn[1−x]Mg[x]S, Zn[1−x]Mg[x]Se and Zn[1−x]Mg[x] Te Ternary Alloys,” Journal of Physics: Condensed Matter, Vol. 17, No. 44, 2005, pp. 7077-7088. doi:10.1088/0953-8984/17/44/001 14. M. Ameri, D. Rached, M. Rabah, F. El Haj Hassan, R. Khenata and M. Doui-Aici, “First Principles Study of Structural and Electronic Properties of Be[x]Zn[1–x]S and Be[x]Zn[1–x]Te Alloys,” Physica Status Solidi (B), Vol. 245, No. 1, 2006, pp. 106-113. doi:10.1002/pssb.200743128 15. H. Baaziz, Z. Charifi, F. El Haj Hassan, S. J. Hashemifar, and H. Akbarzadeh, “FP-LAPW Investigations of Zn[1-x]Be[x]S, Zn[1-x]Be[x]Se and Zn[1-x]Be[x]Te Ternary Alloys,” Physica Status Solidi (B), Vol. 243, No. 6, 2006, p. 1296. 16. Y. Z. Zhu, G. D. Chen and H. G. Ye, “Electronic Structure and Phase Stability of MgO, ZnO, CdO, and Related Ternary Alloys,” Physical Review B, Vol. 77, No. 24, 2008. doi:10.1103/ 17. D. Maouche, F. S. 
Saoud and L. Louail, “Dependence of Structural Properties of ZnO on High Pressure,” Materials Chemistry and Physics, Vol. 106, No. 1, 2007, pp. 11- 15. doi:10.1016/ 18. A. S. Mohammadi, S. M. Baizaee and H. Salehi, “Density Functional Approach to Study Electronic Structure of ZnO Single Crystal,” World Applied Sciences Journal, Vol. 14, No. 10, 2011, pp. 19. H.-L. Shi and Y. Duan, “Band-Gap Bowing and P-Type Doping of (Zn, Mg, Be)O Wide-Gap Semiconductor Alloys: A First-Principles Study,” The European Physical Journal B , Vol. 66, No. 4, 2008, pp. 439-444. doi:10.1140/epjb/e2008-00448-6 20. B. Amrani, I. Chiboub, S. Hiadsi, T. Benmessabih and N. Hamdadou, “Structural and Electronic Properties of ZnO under High Pressures,” Solid State Communications, Vol. 137, No. 7, 2006, pp. 395-399. doi:10.1016/j.ssc.2005.12.020 21. O. Madelung, “Semiconductor: Data Handbook,” 3rd Edition, Springer, New York, 2003. 22. S.-K. Kim, S.-Y. Jeong and C.-R. Cho, “Structural Reconstruction of Hexagonal to Cubic ZnO Films on Pt/Ti/SiO2/Si Substrate by Annealing,” Applied Physics Letters, Vol. 82, No. 4, 2003, p. 562. 23. A. Ashrafi and C. Jagadish, “Review of Zincblende ZnO: Stability of Metastable ZnO Phases,” Journal of Applied Physics, Vol. 102, No. 7, 2007, p. 71101. doi:10.1063/1.2787957 24. H. Y. Wang, J. Cao, X. Y. Huang and J. M. Huang, “Pressure Dependence of Elastic and Dynamicalproperties of Zinc-Blende ZnS and ZnSefrom First Principle Calculation,” Condensed Matter Physics, Vol. 15, No 1, 2012, pp. 1-10. 25. H. Okuyama, Y. Kishita and A. Ishibashi, “Quaternary Alloy Zn[1-x]Mg[x]S[y]Se[1-y],” Physical Review B, Vol. 57, No. 4, 1998, pp. 2257-2263. doi:10.1103/PhysRevB.57.2257 26. S.-G. Lee and K. J. Chang, “First-Principles Study of the Structural Properties of MgS-, MgSe-, ZnS-, and ZnSeBased Superlattices,” Physical Review B, Vol. 52, No. 3, 1995, pp. 1918-1925. 27. S. Zh. Karazhanov, P. Ravindrana, A. Kjekhus, H. Fjellvag, U. Grossner and B. G. 
Svensson, “Electronic Structure and Band Parameters for ZnX (X = O, S, Se, Te),” Journal of Crystal Growth, Vol. 287, No. 1, 2006, pp. 162-168. doi:10.1016/j.jcrysgro.2005.10.061 28. M. Oshikiri and F. Aryasetiawan, “Band Gaps and Quasiparticle Energy Calculations on ZnO, ZnS, and ZnSe in the Zinc-Blende Structure by the GW Approximation,” Physical Review B, Vol. 60, No. 15, 1999, pp. 10754- 10757. doi:10.1103/PhysRevB.60.10754 29. H. Kukimoto, S. Shionoya, T. Koda and T. Hioki, “Infrared Absorption Due to Donor States in ZnS Crystals,” Journal of Physics and Chemistry of Solids, Vol. 29, No. 6, 1968, pp. 935-944. 30. L. Vegard, “Formation of Mixed Crystals by Solid-Phase Contact,” Journal of Physics, Vol. 5, No. 5, 1921, pp. 393-395. 31. J. E. Bernard and A. Zunger, “Optical Bowing in Zinc Chalcogenide Semiconductor Alloys,” Physical Review B, Vol. 34, No. 8, 1986, pp. 5992-5995. doi:10.1103/PhysRevB.34.5992 ^*Corresponding author.
Lesson 3 Attributes that Define Shapes

Warm-up: Number Talk: Multiply Multiples of Ten (10 minutes)

This Number Talk prompts students to use place value and properties of operations to multiply single-digit numbers by multiples of ten. The strategies elicited here help students develop fluency.

• Display one expression.
• “Give me a signal when you have an answer and can explain how you got it.”
• 1 minute: quiet think time
• Record answers and strategy.
• Keep expressions and work displayed.
• Repeat with each expression.

Student Facing

Find the value of each expression mentally.

• \(4\times40\)
• \(8\times40\)
• \(7\times40\)
• \(9\times40\)

Activity Synthesis

• “How were you able to use \(4\times40\) to find the other products?” (I doubled \(4\times40\) to find \(8\times40\). Once I knew \(4\times 40\), I could count up or down by 40 to get the other products.)
• Consider asking: “Why might it be helpful to first think of each expression as multiplying a number and 4 instead of 40?” (We know many multiplication facts for 4, and that 40 is 4 tens. We can think of each problem as groups of 4 tens instead of groups of 40.)

Activity 1: Learn How to Play Mystery Quadrilateral (10 minutes)

The purpose of this activity is to introduce the game Mystery Quadrilateral and strategically consider the questions that could be asked next to determine a mystery quadrilateral. Students play a round of this game against the teacher. In the next activity, students will play this game in groups of 2.

Required Preparation

• Gather a set of quadrilateral cards from the previous lesson.

• “We are going to play a game called Mystery Quadrilateral. Read the directions independently."
• 1 minute: quiet think time
• “Now let’s play a round together. I’ll be Partner A and the class will be Partner B.”
• Play a round of the game.

Student Facing

Play a round of Mystery Quadrilateral with your teacher.

1. Partner A: Choose a shape from the group of quadrilaterals.
Place it in the mystery quadrilateral folder without your partner seeing it. 2. Partner B: Ask up to 5 “yes” or “no” questions to identify the quadrilateral. Then guess which quadrilateral is the mystery quadrilateral. 3. Partner A: Show your partner the mystery quadrilateral. 4. Switch roles and play again. Activity Synthesis • “What kinds of questions might help you figure out the mystery quadrilateral?” (Questions about something we didn’t already know about the shape. General questions to narrow down the type of quadrilateral, then more-specific questions to figure out which one.) Activity 2: Play Mystery Quadrilateral (25 minutes) The purpose of this activity is for students to practice describing geometric attributes of a quadrilateral with increasing precision by playing a game. Students should be encouraged to ask questions like, “Are all the sides the same length?” rather than, “Is it a square?” to keep the focus on attributes of the quadrilateral rather than the name. As students decide which questions to ask they think about important attributes such as side lengths and angles and have an opportunity to use language precisely (MP6, MP7). Students will use the quadrilaterals from the previous lesson to hide in the “mystery quadrilateral” folder, but will have a copy of all the shapes in their workbook to support them in asking questions to narrow down the shape. Students can also cover shapes in their workbook with counters as they rule out shapes. MLR8 Discussion Supports. Synthesis: Think aloud and use gestures to emphasize the attributes that students use to describe the shapes. For example, trace your finger along the angles and sides of the shape as students describe them. Advances: Listening, Representing Engagement: Develop Effort and Persistence. Check in and provide each group with feedback that encourages collaboration and community. 
Supports accessibility for: Social-Emotional Functioning Required Preparation • Each group of 2 needs a set of quadrilateral cards from the previous lesson. • Each group of 2 will need a folder to hide the card during this activity. • Groups of 2 • “Now you’re going to play Mystery Quadrilateral with your partner. Re-read the directions for the game, then think about some words that may be helpful as you play.” (side, angle, right angle, equal, skinny, tall, slanted) • 1 minute: quiet think time • Share and record responses. • Give each group a folder containing a set of the quadrilateral cards from the previous lesson. • “How could you use the images of all the quadrilaterals on your paper as you play?” (They can help me think about questions I could ask. I could mark off quadrilaterals as I figure out that they're not the mystery quadrilateral.) • Give students access to counters and let them know they can be used to cover shapes they want to cross out during the game. • “Play Mystery Quadrilateral with your partner. Be sure to take turns hiding the shape and guessing the shape.” • 10–15 minutes: partner work time Student Facing 1. Partner A: Choose a shape from the group of quadrilaterals. Place it in the mystery quadrilateral folder without your partner seeing it. 2. Partner B: Ask up to 5 “yes” or “no” questions to identify the quadrilateral. Then guess which quadrilateral is the mystery quadrilateral. 3. Partner A: Show your partner the mystery quadrilateral. 4. Switch roles and play again. Activity Synthesis • “What shapes were the easiest to figure out and why?” (W was easy because it was so different with one angle pointing in. X was easy because it was the only square resting on a side.) • “What shapes were the most challenging to ask questions about and why?” (FF was challenging because none of the sides were the same length, and it was hard to get more information with “yes” or “no” questions. 
M and BB were hard to tell apart because they were so similar and it was hard to figure out what questions to ask.) Lesson Synthesis Display cards S, U, and X. “Here are some quadrilaterals that are the same in some ways. What attributes would you use to describe how they’re different?” (I would focus on the number of sides that are the same length. I would focus on the number of right angles they have.) Cool-down: Mystery Shape (5 minutes)
Calculating Shear Diameter from Tensile Properties in context of shear diameter

30 Aug 2024

Calculating Shear Diameter from Tensile Properties: A Comprehensive Guide

Shear diameter is a critical parameter in the analysis of materials’ mechanical behavior, particularly in the context of shear testing. In this article, we will delve into the concept of shear diameter and explore how it can be calculated using tensile properties.

What is Shear Diameter?

Shear diameter (d) is a measure of a material’s resistance to shear deformation. It represents the diameter of a circular specimen that would experience the same amount of shear strain as a rectangular specimen with a given width and length, subjected to a specific shear stress. In other words, it is a way to normalize the shear behavior of different materials.

Calculating Shear Diameter from Tensile Properties

The calculation of shear diameter from tensile properties involves using the following formula:

d = (4 × G × L) / (τ × t)

where:

• d = shear diameter
• G = shear modulus (Pa)
• L = length of the specimen (m)
• τ = shear stress (Pa)
• t = thickness of the specimen (m)

To calculate the shear modulus (G), you can use the following formula, which is derived from the tensile properties:

G = E / (2 × (1 + ν))

where:

• G = shear modulus (Pa)
• E = Young’s modulus (Pa)
• ν = Poisson’s ratio (dimensionless)

Example Calculation

Let’s consider a material with the following tensile properties:

• Young’s modulus (E) = 70,000 MPa
• Poisson’s ratio (ν) = 0.3

Using the formula for the shear modulus (G):

G = E / (2 × (1 + ν)) = 70,000 MPa / (2 × (1 + 0.3)) ≈ 26,923 MPa

Now, let’s assume a rectangular specimen with dimensions L = 10 mm and t = 1 mm, subjected to a shear stress τ = 50 MPa. Using the formula for the shear diameter (d):

d = (4 × G × L) / (τ × t) = (4 × 26,923 MPa × 10 mm) / (50 MPa × 1 mm) ≈ 21,538 mm

Conclusion

In this article, we have explored the concept of shear diameter and demonstrated how it can be calculated using tensile properties. The formula for calculating shear diameter involves the shear modulus, length, thickness, and shear stress of a material. By understanding the relationship between these parameters, engineers and researchers can better analyze the mechanical behavior of materials under various loading conditions.

• ASTM D5379: Standard Test Method for Shear Properties of Composite Materials by the V-Notched Beam Method
• ISO 527-2: Plastics - Determination of tensile properties - Part 2: Test conditions for moulding and extrusion plastics
• ASME/ANSI B93.1M: Standard Terminology Relating to Testing and Materials

Related articles for ‘shear diameter’:

• Reading: Calculating Shear Diameter from Tensile Properties in context of shear diameter

Calculators for ‘shear diameter’
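Re-running the article's worked example with its own two formulas (a minimal sketch; the function names are illustrative, not from any standard library) shows that applying them exactly as written gives G = 70,000 / 2.6 ≈ 26,923 MPa, and the value of d then follows directly:

```python
# Sketch of the article's two formulas; function names are illustrative.

def shear_modulus(E, nu):
    """Shear modulus from Young's modulus E and Poisson's ratio nu: G = E / (2(1 + nu))."""
    return E / (2.0 * (1.0 + nu))

def shear_diameter(G, L, tau, t):
    """Shear diameter per the article's stated formula: d = 4*G*L / (tau*t)."""
    return 4.0 * G * L / (tau * t)

# Values from the article's example: E = 70,000 MPa, nu = 0.3,
# L = 10 mm, t = 1 mm, tau = 50 MPa (consistent units: MPa and mm).
G = shear_modulus(70_000.0, 0.3)        # about 26,923 MPa
d = shear_diameter(G, 10.0, 50.0, 1.0)
print(round(G, 2), round(d, 2))
```

Keeping E and τ in the same unit (here MPa) and L and t in the same unit (here mm) is what makes the formula's ratios consistent.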
Computation and Analysis of First Integrals for Two-Dimensional Integrable Hamiltonian Systems

Core Concepts

This paper discusses the computation of first integrals, also known as conserved quantities or constants of motion, for various two-dimensional integrable Hamiltonian systems, including the two-dimensional harmonic oscillator, the classical Landau problem with a hyperbolic mode, the two-dimensional Kepler problem, and a problem involving a linear curl force.

The paper begins by revisiting the well-studied problem of the two-dimensional harmonic oscillator and discusses its (super)integrability in the light of a canonical transformation that can map the anisotropic oscillator to a corresponding isotropic one. It then explores the computation of first integrals for integrable two-dimensional systems using the framework of the Jacobi last multiplier.

For the two-dimensional harmonic oscillator, the paper presents the conserved quantities and discusses the subtleties that arise when the frequencies are commensurable or incommensurable. It shows that in the case of commensurable frequencies, the anisotropic oscillator is superintegrable in the Liouville sense, as it has three independent globally-defined first integrals.

The paper then applies the last-multiplier formalism to compute additional first integrals for three novel physical examples: the classical Landau problem with a scalar-potential-induced hyperbolic mode, the two-dimensional Kepler problem, and a problem involving a linear curl force. In each case, the paper derives the conserved quantities and provides their physical interpretations.
First integrals of some two-dimensional integrable Hamiltonian systems

"The Hamiltonian for the two-dimensional harmonic oscillator is given by: H = 1/2 * (ω1^2 * (p1^2 + q1^2) + ω2^2 * (p2^2 + q2^2)).

The Hamiltonian for the classical Landau problem with a hyperbolic mode is given by: H = (p_x^2)/2m + ((p_y - mω_c x)^2)/2m + eλxy.

The Hamiltonian for the two-dimensional Kepler problem is given by: H = p_r^2/2 + p_ψ^2/(2r^2) - k/r."

"If ω1/ω2 is a rational number, i.e., the frequencies are commensurable, then the trajectories are periodic and closed; every invariant torus is a union of periodic orbits, which implies that it is foliated by invariant circles. It then makes sense to have three functionally-independent first integrals on the phase space."

"If ω1/ω2 is an irrational number, i.e., the frequencies are incommensurable, then the trajectories in the phase space are only quasi-periodic and are not closed; any trajectory densely fills an invariant torus, meaning that there cannot be three functionally-independent first integrals which are defined globally on a trajectory."

Deeper Inquiries

How can the methods presented in this paper be extended to study the integrability and conserved quantities of higher-dimensional Hamiltonian systems?

The methods discussed in the paper, particularly the Jacobi last multiplier formalism and the analysis of conserved quantities, can be extended to higher-dimensional Hamiltonian systems by leveraging the structure of the phase space and the properties of symplectic geometry. In higher dimensions, the phase space of an n-dimensional system has dimension 2n, necessitating the identification of n functionally independent conserved quantities for Liouville integrability.
To extend the analysis, one can generalize the canonical transformations used in the two-dimensional cases to higher dimensions, ensuring that the transformations preserve the symplectic structure. This involves constructing appropriate coordinate systems that facilitate the separation of variables in the Hamiltonian, similar to the approach taken for the two-dimensional harmonic oscillator. Moreover, the Jacobi last multiplier can be applied in higher dimensions by considering the divergence of the vector field associated with the Hamiltonian dynamics. The last multiplier can be computed in a manner analogous to the two-dimensional case, allowing for the derivation of additional conserved quantities. The existence of these conserved quantities can provide insights into the integrability of the system, particularly in identifying superintegrable systems where more than the minimum number of conserved quantities exist. What are the potential applications of the computed first integrals in the context of the Riemann zeroes and other areas of mathematical physics? The computed first integrals, particularly those derived from the classical Landau problem and the two-dimensional Kepler problem, have significant implications in various areas of mathematical physics, including quantum mechanics and number theory. In the context of the Riemann zeroes, the connections established between classical mechanics and complex analysis can provide a framework for understanding the distribution of prime numbers through the zeros of the Riemann zeta function. The conserved quantities obtained from the analysis of the Landau problem, for instance, can be related to the behavior of quantum systems in strong magnetic fields, which is relevant in the study of quantum Hall effects and other phenomena in condensed matter physics. 
Similarly, the conserved quantities from the Kepler problem can be utilized in celestial mechanics to analyze the motion of celestial bodies under gravitational influences, leading to insights into orbital dynamics and stability. Furthermore, the methods and results presented in the paper can be applied to explore integrable systems in statistical mechanics, where conserved quantities play a crucial role in understanding the equilibrium properties of many-body systems. The interplay between classical integrability and quantum mechanics can also lead to advancements in quantum integrable systems, enhancing our understanding of quantum chaos and the foundations of quantum field theory. Can the insights gained from the analysis of the linear curl force problem lead to a better understanding of the role of non-conservative forces in Hamiltonian dynamics? Yes, the insights gained from the analysis of the linear curl force problem can significantly enhance our understanding of the role of non-conservative forces in Hamiltonian dynamics. The linear curl force, characterized by a non-zero curl, introduces complexities that challenge the traditional framework of Hamiltonian mechanics, which typically assumes conservative forces derived from a potential. By examining the dynamics under linear curl forces, one can explore how these forces affect the conservation of energy and momentum, as well as the overall behavior of the system. The derived conserved quantities, such as those presented in the paper, illustrate how non-conservative forces can still yield meaningful constants of motion, albeit in a modified context. This challenges the notion that Hamiltonian systems must be strictly conservative and opens avenues for studying systems where energy is not conserved, such as in dissipative systems or systems influenced by external fields. 
Moreover, the analysis of curl forces can provide insights into the geometric and topological aspects of phase space, revealing how the structure of the force field influences the trajectories and stability of the system. This understanding can be applied to various physical scenarios, including fluid dynamics, electromagnetism, and even in the study of chaotic systems, where non-conservative forces play a pivotal role in the dynamics. Overall, the exploration of non-conservative forces within the Hamiltonian framework enriches the theoretical landscape and offers practical implications across multiple disciplines in physics.
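As a numerical illustration of first integrals (this sketch is illustrative and not taken from the paper), one can integrate the planar Kepler problem, H = p_r^2/2 + p_ψ^2/(2r^2) − k/r, in Cartesian coordinates and check that its two classical conserved quantities, the energy and the angular momentum, stay constant along the orbit:

```python
# Illustrative check (not from the paper): the planar Kepler problem
# conserves the energy E and the angular momentum Lz along a trajectory.
# State is (x, y, px, py) with unit mass and potential V = -k/r.

def derivs(state, k):
    x, y, px, py = state
    r3 = (x * x + y * y) ** 1.5
    return (px, py, -k * x / r3, -k * y / r3)

def rk4_step(state, k, h):
    # one classical fourth-order Runge-Kutta step
    def shift(s, d, c):
        return tuple(si + c * di for si, di in zip(s, d))
    k1 = derivs(state, k)
    k2 = derivs(shift(state, k1, h / 2), k)
    k3 = derivs(shift(state, k2, h / 2), k)
    k4 = derivs(shift(state, k3, h), k)
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def energy(state, k):
    x, y, px, py = state
    return 0.5 * (px * px + py * py) - k / (x * x + y * y) ** 0.5

def ang_mom(state):
    x, y, px, py = state
    return x * py - y * px

k = 1.0
state = (1.0, 0.0, 0.0, 1.2)   # a bound (elliptic) initial condition, E < 0
E0, L0 = energy(state, k), ang_mom(state)
for _ in range(20000):         # integrate to t = 20 with step h = 1e-3
    state = rk4_step(state, k, 1e-3)
print(abs(energy(state, k) - E0), abs(ang_mom(state) - L0))
```

Plain RK4 is used only to keep the sketch dependency-free; a symplectic integrator would keep the energy error bounded over long times instead of letting it drift slowly.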
Estimate output-error polynomial model using time-domain or frequency-domain data

Output-error (OE) models are a special configuration of polynomial models, having only two active polynomials—B and F. OE models represent conventional transfer functions that relate measured inputs to outputs while also including white noise as an additive output disturbance. You can estimate OE models using time- and frequency-domain data.

The tfest command offers the same functionality as oe. For tfest, you specify the model orders using the number of poles and zeros rather than polynomial degrees. For continuous-time estimation, tfest provides faster and more accurate results, and is recommended.

Estimate OE Model

sys = oe(tt,[nb nf nk]) estimates an OE model sys using the data contained in the variables of timetable tt. The software uses the first Nu variables as inputs and the next Ny variables as outputs, where Nu and Ny are determined from the dimensions of the specified polynomial orders. sys is represented by the equation

y(t) = [B(q)/F(q)] u(t − nk) + e(t)

Here, y(t) is the output, u(t) is the input, and e(t) is the error. The orders [nb nf nk] define the number of parameters in each component of the estimated polynomial.

To select specific input and output channels from tt, use name-value syntax to set 'InputName' and 'OutputName' to the corresponding timetable variable names.

sys = oe(u,y,[nb nf nk]) uses the time-domain input and output signals in the comma-separated matrices u,y. The software assumes that the data sample time is 1 second. To change the sample time, set Ts using name-value syntax.

sys = oe(data,[nb nf nk]) uses the time-domain or frequency-domain data in the data object data.

sys = oe(___,Name,Value) specifies model structure attributes using additional options specified by one or more name-value pair arguments. You can use this syntax with any of the previous input-argument combinations.
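To make the OE structure concrete, the short Python sketch below (an illustrative stand-in, not part of the System Identification Toolbox) simulates y(t) = [B(q)/F(q)]u(t) + e(t) as a difference equation for nb = 2, nf = 2, nk = 1, with the noise e(t) set to zero; the coefficient values are arbitrary:

```python
# Illustrative simulation of an output-error (OE) process, not toolbox code.
# With e(t) = 0, y = [B(q)/F(q)] u reduces to the difference equation below.
# Orders nb = 2, nf = 2, nk = 1: B(q) = b1*q^-1 + b2*q^-2, F(q) = 1 + f1*q^-1 + f2*q^-2.

def simulate_oe(u, b, f):
    """Return y with y[t] = -f[0]*y[t-1] - f[1]*y[t-2] + b[0]*u[t-1] + b[1]*u[t-2]."""
    y = [0.0] * len(u)
    for t in range(len(u)):
        y[t] = (b[0] * (u[t - 1] if t >= 1 else 0.0)
                + b[1] * (u[t - 2] if t >= 2 else 0.0)
                - f[0] * (y[t - 1] if t >= 1 else 0.0)
                - f[1] * (y[t - 2] if t >= 2 else 0.0))
    return y

# Impulse response of an example model (coefficients chosen arbitrarily).
u = [1.0] + [0.0] * 9
y = simulate_oe(u, b=[1.0, 0.5], f=[-1.5, 0.7])
print(y[:4])   # [0.0, 1.0, 2.0, 2.3]
```

Here nk = 1 shows up as the one-sample delay on u; an actual estimation workflow runs in the opposite direction, recovering b and f from measured u, y, which is what oe does.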
Configure Initial Parameters

sys = oe(tt,init_sys) uses the linear system init_sys to configure the initial parameterization of sys for estimation using the timetable tt.

Specify Additional Estimation Options

sys = oe(___,opt) estimates a polynomial model using the option set opt to specify estimation behavior.

Return Estimated Initial Conditions

[sys,ic] = oe(___) returns the estimated initial conditions as an initialCondition object. Use this syntax if you plan to simulate or predict the model response using the same estimation input data and then compare the response with the same estimation output data. Incorporating the initial conditions yields a better match during the first part of the simulation.

Estimate OE Polynomial Model

Estimate an OE polynomial from time-domain data using two methods to specify input delay.

Load the estimation data. Set the orders of the B and F polynomials nb and nf. Set the input delay nk to one sample. Compute the model sys.

nb = 2; nf = 2; nk = 1;
sys = oe(tt1,[nb nf nk]);

Compare the simulated model response with the measured output. The plot shows that the fit percentage between the simulated model and the estimation data is greater than 70%.

Instead of using nk, you can also use the name-value pair argument 'InputDelay' to specify the one-sample delay.

nk = 0;
sys1 = oe(tt1,[nb nf nk],'InputDelay',1);

The results are identical. You can view more information about the estimation by exploring the idpoly property sys.Report.

ans =
  Status: 'Estimated using OE'
  Method: 'OE'
  InitialCondition: 'zero'
  Fit: [1x1 struct]
  Parameters: [1x1 struct]
  OptionsUsed: [1x1 idoptions.polyest]
  RandState: [1x1 struct]
  DataUsed: [1x1 struct]
  Termination: [1x1 struct]

For example, find out more information about the termination conditions.

ans = struct with fields:
  WhyStop: 'Near (local) minimum, (norm(g) < tol).'
  Iterations: 3
  FirstOrderOptimality: 0.0708
  FcnCount: 7
  UpdateNorm: 1.4809e-05
  LastImprovement: 5.1744e-06

The report includes information on the number of iterations and the reason the estimation stopped iterating.

Estimate Continuous-Time OE Model Using Frequency Response

Load the estimation data. The idfrd object data contains the continuous-time frequency response for the following model:

Estimate the model.

nb = 2; nf = 3;
sys = oe(data,[nb nf]);

Evaluate the goodness of fit.

Estimate OE Model Using Regularization

Estimate a high-order OE model from data collected by simulating a high-order system. Determine the regularization constants by trial and error and use the values for model estimation.

Load the data.

load regularizationExampleData.mat m0simdata

Estimate an unregularized OE model of order 30.

m1 = oe(m0simdata,[30 30 1]);

Obtain a regularized OE model by determining the Lambda value using trial and error.

opt = oeOptions;
opt.Regularization.Lambda = 1;
m2 = oe(m0simdata,[30 30 1],opt);

Compare the model outputs with the estimation data.

opt = compareOptions('InitialCondition','z');

The regularized model m2 produces a better fit than the unregularized model m1.

Compare the variance in the model responses.

bp = bodeplot(m1,m2);
bp.PhaseMatchingEnabled = "on";
bp.Characteristics.ConfidenceRegion.NumberOfStandardDeviations = 3;

The regularized model m2 has a reduced variance compared to the unregularized model m1.

Estimate Continuous Model Using Band-Limited Discrete-Time Frequency-Domain Data

Load the estimation data data and sample time Ts.

load oe_data2.mat data Ts

An iddata object data contains the discrete-time frequency response for the following model:

View the estimation sample time Ts that you loaded. This value matches the property data.Ts.

You can estimate a continuous model from data by limiting the input and output frequency bands to the Nyquist frequency.
To do so, specify the estimation prefilter option 'WeightingFilter' to define a passband from 0 to 0.5*pi/Ts rad/s. The software ignores any response values with frequencies outside of that passband.

   opt = oeOptions('WeightingFilter',[0 0.5*pi/Ts]);

Set the Ts property to 0 to treat data as continuous-time data.

Estimate the continuous model.

   nb = 1;
   nf = 3;
   sys = oe(data,[nb nf],opt);

Obtain Initial Conditions

Load the data, which consists of input and output data in matrix form and the sample time.

   load sdata1i umat1i ymat1i Ts1i

Estimate an OE polynomial model sys and return the initial conditions in ic.

   nb = 2;
   nf = 2;
   nk = 1;
   [sys,ic] = oe(umat1i,ymat1i,[nb,nf,nk],'Ts',Ts1i);

   ic =
     initialCondition with properties:
       A: [2x2 double]
       X0: [2x1 double]
       C: [0.9428 0.4824]
       Ts: 0.1000

ic is an initialCondition object that encapsulates the free response of sys, in state-space form, to the initial state vector in X0. You can incorporate ic when you simulate sys with the umat1i input signal and compare the response with the ymat1i output signal.

Input Arguments

tt — Timetable-based estimation data
timetable | cell array of timetables

Estimation data, specified as a timetable that uses a regularly spaced time vector. tt contains variables representing input and output channels. For multiexperiment data, tt is a cell array of timetables of length Ne, where Ne is the number of experiments.

The software determines the number of input and output channels to use for estimation from the dimensions of the specified polynomial orders. The input/output channel selection depends on whether the 'InputName' and 'OutputName' name-value arguments are specified.

• If 'InputName' and 'OutputName' are not specified, then the software uses the first Nu variables of tt as inputs and the next Ny variables of tt as outputs.
• If 'InputName' and 'OutputName' are specified, then the software uses the specified variables.
The number of specified input and output names must be consistent with Nu and Ny. • For functions that can estimate a time series model, where there are no inputs, 'InputName' does not need to be specified. For more information about working with estimation data types, see Data Domains and Data Types in System Identification Toolbox. u, y — Matrix-based estimation data matrices | cell array of matrices Estimation data, specified for SISO systems as a comma-separated pair of N[s]-by-1 real-valued matrices that contain uniformly sampled input and output time-domain signal values. Here, N[s] is the number of samples. For MIMO systems, specify u,y as an input/output matrix pair with the following dimensions: • u — N[s]-by-N[u], where N[u] is the number of inputs. • y — N[s]-by-N[y], where N[y] is the number of outputs. For multiexperiment data, specify u,y as a pair of 1-by-N[e] cell arrays, where N[e] is the number of experiments. The sample times of all the experiments must match. For time series data, which contains only outputs and no inputs, specify [],y. • Matrix-based data does not support estimation from frequency-domain data. You must use a data object such as an iddata object or idfrd object (see data). For more information about working with estimation data types, see Data Domains and Data Types in System Identification Toolbox. data — Estimation data iddata object | frd object | idfrd object Estimation data, specified as an iddata object, an frd object, or an idfrd object. For time-domain estimation, data must be an iddata object containing the input and output signal values. For frequency-domain estimation, data can be one of the following: • Recorded frequency response data (frd (Control System Toolbox) or idfrd) • iddata object with properties specified as follows: □ InputData — Fourier transform of the input signal □ OutputData — Fourier transform of the output signal □ Domain — 'Frequency' Time-domain estimation data must be uniformly sampled. 
By default, the software sets the sample time of the model to the sample time of the estimation data. For multiexperiment data, the sample times and intersample behavior of all the experiments must match.

You can compute discrete-time models from time-domain data or discrete-time frequency-domain data. Use tfest to compute continuous-time models.

[nb nf nk] — OE model orders
integer row vector | row vector of integer matrices

OE model orders, specified as a 1-by-3 vector or a vector of integer matrices.

For a system represented by y(t) = [B(q)/F(q)] u(t-nk) + e(t), where y(t) is the output, u(t) is the input, and e(t) is the error, the elements of [nb nf nk] are as follows:

• nb — Order of the B(q) polynomial + 1, which is equivalent to the length of the B(q) polynomial. nb is an N[y]-by-N[u] matrix. N[y] is the number of outputs and N[u] is the number of inputs.
• nf — Order of the F(q) polynomial. nf is an N[y]-by-N[u] matrix.
• nk — Input delay, expressed as the number of samples. nk is an N[y]-by-N[u] matrix. The delay appears as leading zeros of the B polynomial.

For estimation using continuous-time frequency-domain data, specify only [nb nf] and omit nk. For an example, see Estimate Continuous-Time OE Model Using Frequency Response.

init_sys — Linear system
idpoly model | linear model | structure

Linear system that configures the initial parameterization of sys, specified as an idpoly model, another linear model, or a structure. You obtain init_sys either by performing an estimation using measured data or by direct construction.

If init_sys is an idpoly model of the OE structure, oe uses the parameter values of init_sys as the initial guess for estimating sys. The sample time of init_sys must match the sample time of the estimation data.

Use the Structure property of init_sys to configure initial guesses and constraints for B(q) and F(q). For example:

• To specify an initial guess for the F(q) term of init_sys, set init_sys.Structure.F.Value as the initial guess.
• To specify constraints for the B(q) term of init_sys: □ Set init_sys.Structure.B.Minimum to the minimum B(q) coefficient values. □ Set init_sys.Structure.B.Maximum to the maximum B(q) coefficient values. □ Set init_sys.Structure.B.Free to indicate which B(q) coefficients are free for estimation. If init_sys is not a polynomial model of the OE structure, the software first converts init_sys to an OE structure model. oe uses the parameters of the resulting model as the initial guess for estimating sys. If you do not specify opt and init_sys was obtained by estimation, then the software uses estimation options from init_sys.Report.OptionsUsed. opt — Estimation options oeOptions option set Estimation options, specified as an oeOptions option set. Options specified by opt include: • Estimation objective • Handling of initial conditions • Numerical search method and the associated options For examples of specifying estimation options, see Estimate Continuous Model Using Band-Limited Discrete-Time Frequency-Domain Data. Name-Value Arguments Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter. Before R2021a, use commas to separate each name and value, and enclose Name in quotes. Example: 'InputDelay',1 InputName — Input channel names string | character vector | string array | cell array of character vectors Input channel names, specified as a string, character vector, string array, or cell array of character vectors. If you are using a timetable for the data source, the names in InputName must be a subset of the timetable variables. Example: sys = oe(tt,__,'InputName',["u1" "u2"]) selects the variables u1 and u2 as the input channels from the timetable tt to use for the estimation. 
OutputName — Output channel names string | character vector | string array | cell array of character vectors Output channel names, specified as a string, character vector, string array, or cell array of character vectors. If you are using a timetable for the data source, the names in OutputName must be a subset of the timetable variables. Example: sys = oe(tt,__,'OutputName',["y1" "y3"]) selects the variables y1 and y3 as the output channels from the timetable tt to use for the estimation. Ts — Sample time 1 (default) | positive scalar Sample time, specified as the comma-separated pair consisting of 'Ts' and the sample time in the units specified by TimeUnit. When you use matrix-based data (u,y), you must specify Ts if you require a sample time other than the assumed sample time of 1 second. To obtain the data sample time for a timetable tt, use the timetable property tt.Properties.Timestep. Example: oe(umat1,ymat1,___,'Ts',0.08) computes a model with sample time of 0.08 seconds. InputDelay — Input delays 0 (default) | positive integer vector | integer scalar Input delays for each input channel, specified as the comma-separated pair consisting of 'InputDelay' and a numeric vector. • For continuous-time models, specify 'InputDelay' in the time units stored in the TimeUnit property. • For discrete-time models, specify 'InputDelay' in integer multiples of the sample time Ts. For example, setting 'InputDelay' to 3 specifies a delay of three sampling periods. For a system with N[u] inputs, set InputDelay to an N[u]-by-1 vector. Each entry of this vector is a numerical value that represents the input delay for the corresponding input channel. To apply the same delay to all channels, specify 'InputDelay' as a scalar. For an example, see Estimate OE Polynomial Model. IODelay — Transport delays 0 (default) | scalar | numeric array Transport delays for each input-output pair, specified as the comma-separated pair consisting of 'IODelay' and a numeric array. 
• For continuous-time models, specify 'IODelay' in the time units stored in the TimeUnit property.
• For discrete-time models, specify 'IODelay' in integer multiples of the sample time Ts. For example, setting 'IODelay' to 4 specifies a transport delay of four sampling periods.

For a system with N[u] inputs and N[y] outputs, set 'IODelay' to an N[y]-by-N[u] matrix. Each entry is an integer value representing the transport delay for the corresponding input-output pair. To apply the same delay to all channels, specify 'IODelay' as a scalar.

You can specify 'IODelay' as an alternative to the nk value. Doing so simplifies the model structure by reducing the number of leading zeros in the B polynomial. In particular, you can represent max(nk-1,0) leading zeros as input-output delays using 'IODelay' instead.

Output Arguments

sys — OE polynomial model
idpoly object

OE polynomial model that fits the estimation data, returned as an idpoly model object. This model is created using the specified model orders, delays, and estimation options.

The sample time of sys matches the sample time of the estimation data. Therefore, sys is always a discrete-time model when estimated from time-domain data. For continuous-time model identification using time-domain data, use tfest.

The Report property of the model stores information about the estimation results and options used. Report has the following fields.

Status — Summary of the model status, which indicates whether the model was created by construction or obtained by estimation

Method — Estimation command used

InitialCondition — Handling of initial conditions during model estimation, returned as one of the following values:
• 'zero' — The initial conditions were set to zero.
• 'estimate' — The initial conditions were treated as independent estimation parameters.
• 'backcast' — The initial conditions were estimated using the best least squares fit.
This field is especially useful to view how the initial conditions were handled when the InitialCondition option in the estimation option set is 'auto'.

Fit — Quantitative assessment of the estimation, returned as a structure. See Loss Function and Model Quality Metrics for more information on these quality metrics. The structure has these fields:
• FitPercent — Normalized root mean squared error (NRMSE) measure of how well the response of the model fits the estimation data, expressed as the percentage fitpercent = 100(1 - NRMSE)
• LossFcn — Value of the loss function when the estimation completes
• MSE — Mean squared error (MSE) measure of how well the response of the model fits the estimation data
• FPE — Final prediction error for the model
• AIC — Raw Akaike Information Criteria (AIC) measure of model quality
• AICc — Small-sample-size corrected AIC
• nAIC — Normalized AIC
• BIC — Bayesian Information Criteria (BIC)

Parameters — Estimated values of model parameters

OptionsUsed — Option set used for estimation. If no custom options were configured, this is a set of default options. See oeOptions for more information.

RandState — State of the random number stream at the start of estimation. Empty, [], if randomization was not used during estimation. For more information, see rng.

DataUsed — Attributes of the data used for estimation, returned as a structure with the following fields:
• Name — Name of the data set
• Type — Data type
• Length — Number of data samples
• Ts — Sample time
• InterSample — Input intersample behavior, returned as one of the following values:
  □ 'zoh' — A zero-order hold maintains a piecewise-constant input signal between samples.
  □ 'foh' — A first-order hold maintains a piecewise-linear input signal between samples.
  □ 'bl' — Band-limited behavior specifies that the continuous-time input signal has zero power above the Nyquist frequency.
• InputOffset — Offset removed from time-domain input data during estimation. For nonlinear models, it is [].
• OutputOffset — Offset removed from time-domain output data during estimation. For nonlinear models, it is [].

Termination — Termination conditions for the iterative search used for prediction error minimization, returned as a structure with these fields:
• WhyStop — Reason for terminating the numerical search
• Iterations — Number of search iterations performed by the estimation algorithm
• FirstOrderOptimality — ∞-norm of the gradient search vector when the search algorithm terminates
• FcnCount — Number of times the objective function was called
• UpdateNorm — Norm of the gradient search vector in the last iteration. Omitted when the search method is 'lsqnonlin' or 'fmincon'.
• LastImprovement — Criterion improvement in the last iteration, expressed as a percentage. Omitted when the search method is 'lsqnonlin' or 'fmincon'.
• Algorithm — Algorithm used by the 'lsqnonlin' or 'fmincon' search method. Omitted when other search methods are used.

For estimation methods that do not require numerical search optimization, the Termination field is omitted.

For more information on using Report, see Estimation Report.

ic — Initial conditions
initialCondition object | object array of initialCondition values

Estimated initial conditions, returned as an initialCondition object or an object array of initialCondition values.

• For a single-experiment data set, ic represents, in state-space form, the free response of the transfer function model (A and C matrices) to the estimated initial states (x[0]).
• For a multiple-experiment data set with N[e] experiments, ic is an object array of length N[e] that contains one set of initialCondition values for each experiment.

If oe returns ic values of 0 and you know that you have non-zero initial conditions, set the 'InitialCondition' option in oeOptions to 'estimate' and pass the updated option set to oe.
For example:

   opt = oeOptions('InitialCondition','estimate')
   [sys,ic] = oe(data,np,nz,opt)

The default setting of 'auto' uses the 'zero' method when the initial conditions have a negligible effect on the overall estimation-error minimization process. Specifying 'estimate' ensures that the software estimates values for ic.

For more information, see initialCondition. For an example of using this argument, see Obtain Initial Conditions.

More About

Output-Error (OE) Model

The general output-error model structure is:

   y(t) = [B(q)/F(q)] u(t-nk) + e(t)

The orders of the output-error model are:

   nb: B(q) = b_1 + b_2 q^-1 + ... + b_nb q^-(nb-1)
   nf: F(q) = 1 + f_1 q^-1 + ... + f_nf q^-nf

Continuous-Time Output-Error Model

If data is continuous-time frequency-domain data, oe estimates a continuous-time model with the following transfer function:

   Y(s) = [B(s)/F(s)] U(s) + E(s)

The orders of the numerator and denominator are nb and nf, similar to the discrete-time case. However, the sample delay nk does not exist in the continuous case, and you should not specify nk when you command the estimation. Instead, express any system delay using the name-value pair argument 'IODelay' along with the system delay in the time units that are stored in the property TimeUnit. For example, suppose that your continuous system has a delay of iod seconds. Use model = oe(data,[nb nf],'IODelay',iod).

Version History

Introduced before R2006a

R2022b: Time-domain estimation data is accepted in the form of timetables and matrices

Most estimation, validation, analysis, and utility functions now accept time-domain input/output data in the form of a single timetable that contains both input and output data or a pair of matrices that contain the input and output data separately. These functions continue to accept iddata objects as a data source as well, for both time-domain and frequency-domain data.
R2018a: Advanced Options are deprecated for SearchOptions when SearchMethod is 'lsqnonlin'

Specification of lsqnonlin-related advanced options is deprecated, including the option to invoke parallel processing when estimating using the lsqnonlin search method, or solver, in Optimization Toolbox.
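The OE structure documented in the More About section can be sanity-checked outside MATLAB. The following Python sketch is an editor's illustration, not part of the MathWorks documentation, and the coefficient values are invented; it simulates the noise-free difference equation y(t) = [B(q)/F(q)] u(t-nk) with B(q) = b1 + b2 q^-1 + ... and F(q) = 1 + f1 q^-1 + ...

```python
def simulate_oe(b, f, u, nk=1):
    """Simulate w(t) where F(q) w(t) = B(q) u(t - nk).

    b  -- [b1, b2, ...], coefficients of B(q) = b1 + b2*q^-1 + ...
    f  -- [f1, f2, ...], coefficients of F(q) = 1 + f1*q^-1 + ...
    u  -- input samples; samples before t = 0 are taken to be zero
    nk -- input delay in samples
    """
    w = [0.0] * len(u)
    for t in range(len(u)):
        acc = 0.0
        for i, bi in enumerate(b):        # B(q) u(t - nk)
            if t - nk - i >= 0:
                acc += bi * u[t - nk - i]
        for j, fj in enumerate(f):        # move F's past terms to the right side
            if t - 1 - j >= 0:
                acc -= fj * w[t - 1 - j]
        w[t] = acc
    return w

# impulse response of B(q)/F(q) = 0.5 / (1 - 0.8*q^-1) with one sample of delay
resp = simulate_oe([0.5], [-0.8], [1.0, 0.0, 0.0, 0.0], nk=1)
```

With these made-up coefficients the response is 0, 0.5, 0.4, 0.32, decaying geometrically as the pole at 0.8 predicts.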
LET'S GET COZY! DEATH OF A MAD HATTER: A PREVIEW AND GIVEAWAY A Hat Shop Mystery Scarlett Parker and her British cousin, Vivian Tremont, are hard at work at Mim’s Whims—their ladies’ hat shop on London’s chic Portobello Road—to create hats for an Alice in Wonderland themed afternoon tea, a fund-raiser for a local children’s hospital. It seems like a wonderfully whimsical way to pass the hat, and Scarlett and Viv are delighted to outfit the Grisby family, the hosts who are hoping to raise enough money to name a new hospital wing after their patriarch. Unfortunately, the Grisby heir will not live to see it—he’s been poisoned. When traces of the poison are found on the hat Scarlett and Viv made for him, the police become curiouser and curiouser about their involvement. Now the ladies need to don their thinking caps and find the tea party crasher who’s mad enough to kill at the drop of a hat... Pseudonym for Lucy Lawrence. Pseudonym: Josie Belle. Jenn McKinlay is what is known as a dessert-freakosaurus. She has been known to eat leftover birthday cake for breakfast, lunch and dinner and the frozen top tier of her wedding cake didn’t stand a chance of seeing its first anniversary. Luckily, her husband is more of a salt guy. Since the arrival of the hooligans (her sons), she has honed her culinary skills to include cupcakes and has baked and frosted cupcakes into the shape of cats, mice and outerspace aliens to name just a few. Writing a mystery series based upon one of her favorite food groups (cupcakes) is as enjoyable as licking the beater and she can’t wait to whip up the next one. She is the author of another mystery series under the name Lucy Lawrence and lives in Scottsdale, Arizona with her musician husband Chris, their two sons, two cats, one dog and one fish. SOME OTHER BOOKS BY JENN McKINLAY (aka JOSIE BELL/LUCY LAWRENCE): I HAVE ONE COPY OF DEATH OF A MAD HATTER, --U.S. RESIDENTS ONLY --NO P. O. BOXES ---INCLUDE YOUR EMAIL ADDRESS IN CASE YOU WIN! 
--ALL COMMENTS MUST BE SEPARATE TO COUNT AS MORE THAN ONE!

HOW TO ENTER:

+1 ENTRY: COMMENT ON WHAT YOU READ ABOVE ABOUT DEATH OF A MAD HATTER THAT MADE YOU WANT TO WIN THIS BOOK, AND DON'T FORGET YOUR EMAIL ADDRESS

+1 MORE ENTRY: BLOG AND/OR TWEET ABOUT THIS GIVEAWAY AND COME BACK HERE AND LEAVE ME YOUR LINK

+1 MORE ENTRY: COMMENT ON SOMETHING YOU FIND INTERESTING AT JENN McKINLAY'S WEBSITE HERE

+1 MORE ENTRY: COMMENT ON ONE WAY YOU FOLLOW MY BLOG. IF YOU FOLLOW MORE THAN ONE WAY, YOU CAN COMMENT SEPARATELY AND EACH WILL COUNT AS AN ENTRY

+1 MORE ENTRY: COMMENT ON A CURRENT GIVEAWAY THAT YOU HAVE ENTERED ON MY BLOG. IF YOU ENTERED MORE THAN ONE, YOU MAY COMMENT SEPARATELY FOR EACH TO RECEIVE MORE ENTRIES

GIVEAWAY ENDS 6 PM, EST, MAY 26
ECOO '19 R2 P3 - Ribbon After wrapping a present for her friend's birthday, Elaine discovered that she has a long length of ribbon left over. The ribbon is currently unwound, so she decided that she will fold the ribbon times before putting it away in a small box. To make sure the ribbon will fit, she would like to know what the length and thickness of the ribbon will be once she performs her sequence of folds. The ribbon initially has a length of units and a thickness of unit. When folding a ribbon at a point from the left, the part of the ribbon left of is folded onto the part of the ribbon right of . Formally, for all , the thickness of the ribbon at the point is added to the thickness at the point and the thickness at the point becomes zero. Folding the ribbon from the right is the same up to The thickness of the ribbon is defined as the maximum thickness of any given point. The length of the ribbon is defined as the number of points with non-zero thickness. Given the sequence of folds, can you help Elaine determine the dimensions of the final ribbon? Input Specification The input will contain 10 datasets. Each dataset begins with two integers , (, ), the initial length of the ribbon and the number of folds. The points of the ribbon are numbered from to . The next lines each contain an integer () followed by either L or R, representing a fold at point from either the left or right. The point is guaranteed to be at a point with non-zero thickness. For the first 3 cases, each fold will be of type L. For the first 6 cases, . Output Specification For each dataset, output two space-separated integers: the length and thickness of the ribbon. Sample Input (Two Datasets Shown) 3 L 10 L 10 R Sample Output Explanation of Sample Datasets In the first dataset, the thickness of the ribbon at each point is . In the second dataset, the thickness of the ribbon at each point is . 
Educational Computing Organization of Ontario - statements, test data and other materials can be found at ecoocs.org
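The fold operation in the problem can be prototyped with a naive simulation. The Python sketch below is illustrative only: the statement's exact index ranges were lost in formatting, so the convention used here (points 0..L-1, a left fold at p sending each point q < p onto 2p - q) is an assumption, and no attempt is made at the efficiency the real constraints would demand.

```python
def fold(thick, p, side):
    """Fold the ribbon at point p. 'L' folds the part left of p onto the
    part right of p; 'R' is the mirror image. Assumes the folded part
    lands inside the ribbon (2*p - q stays in range)."""
    if side == 'L':
        for q in range(p):
            thick[2 * p - q] += thick[q]
            thick[q] = 0
    else:
        for q in range(p + 1, len(thick)):
            thick[2 * p - q] += thick[q]
            thick[q] = 0
    return thick

def dimensions(thick):
    """Length = number of points with non-zero thickness; thickness = max."""
    nonzero = [t for t in thick if t > 0]
    return len(nonzero), max(nonzero)

ribbon = [1] * 5          # a small made-up ribbon: length 5, thickness 1
fold(ribbon, 2, 'L')      # -> [0, 0, 1, 2, 2]
fold(ribbon, 3, 'R')      # -> [0, 0, 3, 2, 0]
```

Under this convention, a length-5 ribbon folded left at 2 and then right at 3 ends with length 2 and thickness 3.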
Gradient and Directional Derivatives

Thread starter: dmalwcc89

In summary, the previous exercise involved finding the gradient of f for a given surface and determining a vector normal to the surface at point P = (2, 2, 8) that points towards the xy-plane. The particle in question passes through the point Q = (34, 18, 0) on the xy-plane and, travelling at a constant speed of 8 cm/s along the path c(t) = (2, 2, 8) + 8t·(1/√21)(4, 2, -1), takes √21 ≈ 4.58 seconds to reach that point.

Homework Statement

Suppose, in the previous exercise, that a particle located at the point P = (2, 2, 8) travels towards the xy-plane in the direction normal to the surface.

a) Through which point Q on the xy-plane will the particle pass?

b) Suppose the axes are calibrated in centimeters. Determine the path c(t) of the particle if it travels at a constant speed 8 cm/s. How long will it take the particle to reach Q?

Homework Equations

Gradient of F: <dF/dx, dF/dy, dF/dz>

The Attempt at a Solution

I completed the "previous exercise": I found the gradient of f for the equation z^2 - 2x^4 - y^4 = 16 and was asked to find a vector n normal to this surface at P = (2, 2, 8) that points in the direction of the xy-plane. After computing the gradient and normalizing, I was left with (1/√21)<-4, -2, 1>. The option was either + or - this value, and since (2, 2, 8) lies above the xy-plane, I needed the negative value. My answer was finally n = -(1/√21)<-4, -2, 1> = (1/√21)<4, 2, -1>.

a) Through which point Q on the xy-plane will the particle pass?

The particle travels in the direction n = (1/√21)<4, 2, -1>, so it moves in the negative z direction, toward the xy-plane. The line through P in this direction is (2 + 4s, 2 + 2s, 8 - s). The z-coordinate is zero when s = 8, so the particle passes through Q = (34, 18, 0).

b) Determine the path c(t) of the particle if it travels at a constant speed 8 cm/s. How long will it take the particle to reach Q?

Since n is a unit vector, the path at speed 8 cm/s is c(t) = (2, 2, 8) + 8t·(1/√21)<4, 2, -1>, where t is the time in seconds. The distance between P and Q is |Q - P| = √(32² + 16² + 8²) = 8√21 cm, so the particle takes 8√21/8 = √21 ≈ 4.58 seconds to reach Q.

FAQ: Gradient and Directional Derivatives

What is a gradient?
A gradient is a mathematical concept that represents the direction and magnitude of the steepest slope of a function at a specific point.

What is a directional derivative?
A directional derivative is a measure of how a function changes in a specific direction, defined by a vector, at a given point.

How is the gradient related to directional derivatives?
The gradient is the vector that points in the direction of the steepest slope of a function at a specific point. The directional derivative is the rate of change of the function in that direction.

What is the formula for calculating a directional derivative?
The formula for calculating a directional derivative is Df(v) = ∇f · v, where ∇f is the gradient of the function f and v is the unit vector representing the direction in which the derivative is being calculated.

What are some real-world applications of gradient and directional derivatives?
Gradient and directional derivatives are used in fields such as physics, engineering, and economics to optimize functions and predict the behavior of systems. For example, they can be used to determine the direction in which a ball will roll on a sloped surface or to find the optimal direction for a plane to fly in order to minimize fuel consumption.
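The geometry of this exercise is easy to check numerically. The short Python sketch below is an editor's verification aid, not part of the original thread: it recomputes the gradient of F(x, y, z) = z² - 2x⁴ - y⁴ - 16 at P, flips it so it points toward the xy-plane, and follows that unit direction until z = 0, which yields Q = (34, 18, 0) and a travel time of √21 seconds at 8 cm/s.

```python
import math

def grad_F(x, y, z):
    # F(x, y, z) = z**2 - 2*x**4 - y**4 - 16; the surface is F = 0
    return (-8 * x**3, -4 * y**3, 2 * z)

P = (2.0, 2.0, 8.0)
g = grad_F(*P)                                   # (-64, -32, 16)
mag = math.sqrt(sum(c * c for c in g))
unit = tuple(c / mag for c in g)                 # (-4, -2, 1) / sqrt(21)

# pick the sign with a negative z-component so the normal points at the xy-plane
n = unit if unit[2] < 0 else tuple(-c for c in unit)

s = -P[2] / n[2]                                 # arc length until z = 0
Q = tuple(Pi + s * ni for Pi, ni in zip(P, n))   # intersection with the xy-plane
travel_time = s / 8.0                            # seconds at a speed of 8 cm/s
```

Because n is a unit vector, s is the actual distance travelled, so dividing by the speed gives the time directly.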
PARMS Statement

HOLD=value-list
specifies which parameter values PROC GLIMMIX should hold equal to the specified values. For example, the following statement constrains the first and third covariance parameters to equal 5 and 2, respectively:

   parms (5) (3) (2) (3) / hold=1,3;

Covariance or scale parameters that are held fixed with the HOLD= option are treated as constrained parameters in the optimization. This is different from evaluating the objective function, gradient, and Hessian matrix at known values of the covariance parameters. A constrained parameter introduces a singularity in the optimization process. The covariance matrix of the covariance parameters (see the ASYCOV option of the PROC GLIMMIX statement) is then based on the projected Hessian matrix. As a consequence, the variance of parameters subjected to a HOLD= is zero. Such parameters do not contribute to the computation of denominator degrees of freedom with the DDFM=KENWARDROGER and DDFM=SATTERTHWAITE methods, for example. If you want to treat the covariance parameters as known, without imposing constraints on the optimization, you should use the NOITER option.

When you place a hold on all parameters (or when you specify the NOITER option) in a GLMM, you might notice that PROC GLIMMIX continues to produce an iteration history. Unless your model is a linear mixed model, several recomputations of the pseudo-response might be required in linearization-based methods to achieve agreement between the pseudo-data and the covariance matrix. In other words, the GLIMMIX procedure continues to update the fixed-effects estimates (and random-effects solutions) until convergence is achieved.

In certain models, placing a hold on covariance parameters implies that the procedure processes the parameters in the same order as if the NOPROFILE option were in effect. This can change the order of the covariance parameters when you place a hold on one or more parameters.
Models that are subject to this reordering are those with R-side covariance structures whose scale parameter could be profiled. This includes the TYPE=CS, TYPE=SP, TYPE=AR, TYPE=TOEP, and TYPE=ARMA covariance structures.

LOWERB=value-list
enables you to specify lower boundary constraints for the covariance or scale parameters. The value-list specification is a list of numbers or missing values (.) separated by commas. You must list the numbers in the same order that PROC GLIMMIX uses for the value-list in the PARMS statement, and each number corresponds to the lower boundary constraint. A missing value instructs PROC GLIMMIX to use its default constraint, and if you do not specify numbers for all of the covariance parameters, PROC GLIMMIX assumes that the remaining ones are missing. This option is useful, for example, when you want to constrain the diagonal elements of a Cholesky root parameterization to remain strictly positive: proc glimmix; class person; model y = time; random int time / type=chol sub=person; parms / lowerb=1e-4,.,1e-4; Here, the TYPE=CHOL structure is used in order to specify a Cholesky root parameterization for the covariance matrix of the intercept and time random effects, and the lower bounds keep its diagonal elements away from zero.

NOBOUND
requests the removal of boundary constraints on covariance and scale parameters in mixed models. For example, variance components have a default lower boundary constraint of 0, and the NOBOUND option allows their estimates to be negative. See the NOBOUND option in the PROC GLIMMIX statement for further details.

NOITER
requests that no optimization of the covariance parameters be performed. This option has no effect in generalized linear models. If you specify the NOITER option, PROC GLIMMIX uses the values for the covariance parameters given in the PARMS statement to perform statistical inferences. Note that the NOITER option is not equivalent to specifying a HOLD= value for all covariance parameters. If you use the NOITER option, covariance parameters are not constrained in the optimization. This prevents singularities that might otherwise occur in the optimization process.
If a residual variance is profiled, the parameter estimates can change from the initial values you provide as the residual variance is recomputed. To prevent an update of the residual variance, combine the NOITER option with the NOPROFILE option in the PROC GLIMMIX statements, as in the following code: proc glimmix noprofile; class A B C rep mp sp; model y = A | B | C; random rep mp sp; parms (180) (200) (170) (1000) / noiter; When you specify the NOITER option in a model where parameters are estimated by pseudo-likelihood techniques, you might notice that the GLIMMIX procedure continues to produce an iteration history. Unless your model is a linear mixed model, several recomputations of the pseudo-response might be required in linearization-based methods to achieve agreement between the pseudo-data and the covariance matrix. In other words, the GLIMMIX procedure continues to update the profiled fixed-effects estimates (and random-effects solutions) until convergence is achieved. To prevent these updates, use the MAXLMMUPDATE= option in the PROC GLIMMIX statement. Specifying the NOITER option in the PARMS statement of a GLMM with pseudo-likelihood estimation has the same effect as choosing TECHNIQUE=NONE in the NLOPTIONS statement. If you want to base initial fixed-effects estimates on the results of fitting a generalized linear model, then you can combine the NOITER option with the TECHNIQUE= option. For example, the following statements determine the starting values for the fixed effects by fitting a logistic model (without random effects) with the Newton-Raphson algorithm: proc glimmix startglm inititer=10; class clinic A; model y/n = A / link=logit dist=binomial; random clinic; parms (0.4) / noiter; nloptions technique=newrap; The initial GLM fit stops at convergence or after at most 10 iterations, whichever comes first. The pseudo-data for the linearized GLMM is computed from the GLM estimates. 
The variance of the Clinic random effect is held constant at 0.4 during subsequent iterations that update the fixed effects only. If you also want to combine the GLM fixed-effects estimates with known and fixed covariance parameter values without updating the fixed effects, you can add the MAXLMMUPDATE=0 option: proc glimmix startglm inititer=10 maxlmmupdate=0; class clinic A; model y/n = A / link=logit dist=binomial; random clinic; parms (0.4) / noiter; nloptions technique=newrap; In a GLMM with parameter estimation by METHOD=LAPLACE or METHOD=QUAD, the NOITER option also leads to an iteration history, since the fixed-effects estimates are part of the optimization and the PARMS statement places restrictions on only the covariance parameters. Finally, the NOITER option can be useful if you want to obtain minimum variance quadratic unbiased estimates (with 0 priors), also known as MIVQUE0 estimates (Goodnight 1978b). Because MIVQUE0 estimates are the default starting values for covariance parameters—unless you provide (value-list) in the PARMS statement—the following statements produce MIVQUE0 mixed model estimates: proc glimmix noprofile; class A B; model y = A; random int / subject=B; parms / noiter;

PARMSDATA=SAS-data-set
reads in covariance parameter values from a SAS data set. The data set should contain the numerical variable ESTIMATE or the numerical variables Covp1–Covpq, where q denotes the number of covariance parameters. If the PARMSDATA= data set contains multiple sets of covariance parameters, the GLIMMIX procedure evaluates the initial objective function for each set and commences the optimization step by using the set with the lowest function value as the starting values. For example, the following SAS statements request that the objective function be evaluated for three sets of initial values: data data_covp; input covp1-covp4; proc glimmix; class A B C rep mainEU smallEU; model yield = A|B|C; random rep mainEU smallEU; parms / pdata=data_covp; Each set comprises four covariance parameters.
The order of the observations in a data set with the numerical variable Estimate corresponds to the order of the covariance parameters in the "Covariance Parameter Estimates" table. In a GLM, the PARMSDATA= option can be used to set the starting value for the exponential family scale parameter. A grid search is not conducted for GLMs if you specify multiple values. The PARMSDATA= data set must not contain missing values. If the GLIMMIX procedure is processing the input data set in BY groups, you can add the BY variables to the PARMSDATA= data set. If this data set is sorted by the BY variables, the GLIMMIX procedure matches the covariance parameter values to the current BY group. If the PARMSDATA= data set does not contain all BY variables, the data set is processed in its entirety for every BY group and a message is written to the log. This enables you to provide a single set of starting values across BY groups, as in the following statements: data data_covp; input covp1-covp4; proc glimmix; class A B C rep mainEU smallEU; model yield = A|B|C; random rep mainEU smallEU; parms / pdata=data_covp; by year; The same set of starting values is used for each value of the year variable.

UPPERB=value-list
enables you to specify upper boundary constraints on the covariance parameters. The value-list specification is a list of numbers or missing values (.) separated by commas. You must list the numbers in the same order that PROC GLIMMIX uses for the value-list in the PARMS statement, and each number corresponds to the upper boundary constraint. A missing value instructs PROC GLIMMIX to use its default constraint. If you do not specify numbers for all of the covariance parameters, PROC GLIMMIX assumes that the remaining ones are missing.
{"url":"http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/statug_glimmix_sect008.htm","timestamp":"2024-11-12T14:31:39Z","content_type":"application/xhtml+xml","content_length":"31783","record_id":"<urn:uuid:8e03aa4f-da0c-43ed-921b-e99e73967715>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00334.warc.gz"}
Daily Savings Calculator: Compound Interest Calculator With Daily Deposits

It is important to note that the more frequent the compounding, the more interest will accrue. Daily compounded interest will result in more interest paid than interest compounded monthly or annually. Recall that the exponent on that formula is the number of compounding periods. Now let's take a look at what happens at the end of the second quarter. Now, you deposit $135 again, but this time, this deposit will accrue interest using the compound interest formula ten times.

Growth Chart
The compounding that accrues the most interest is continuous compounding, and after that, the order from highest to lowest interest accrued is daily, monthly, quarterly, semiannually, and annually. When you invest in the stock market, you don't earn a set interest rate but rather a return based on the change in the value of your investment. We at The Calculator Site work to develop quality tools to assist you with your financial calculations. We can't, however, advise you about where to invest your money to achieve the best returns for you. During the second year, instead of earning interest on just the principal of $100, you'd earn interest on $110, meaning that your balance after two years is $121. While this is a small difference initially, it can add up significantly when compounded over time. After 20 years, the investment will have grown to $673 instead of $300 through simple interest. The Rule of 72 is a shortcut to determine how long it will take for a specific amount of money to double given a fixed return rate that compounds annually.
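The $100-at-10% example above can be checked numerically. Here is a minimal sketch (the function names are illustrative, not from any particular library) comparing compound and simple interest:

```python
def compound_balance(principal, rate, years):
    """Balance after annual compounding: P * (1 + r)^t."""
    return principal * (1 + rate) ** years

def simple_balance(principal, rate, years):
    """Balance with simple interest: P * (1 + r * t)."""
    return principal * (1 + rate * years)

# $100 at 10% per year, as in the example above
print(round(compound_balance(100, 0.10, 2)))   # 121 after two years
print(round(compound_balance(100, 0.10, 20)))  # 673 after twenty years
print(round(simple_balance(100, 0.10, 20)))    # 300 with simple interest
```

The gap between $673 and $300 is exactly the "interest on interest" the article describes.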
One can use it for any investment as long as it involves a fixed rate with compound interest in a reasonable range. Simply divide the number 72 by the annual rate of return to determine how many years it will take to double. Because it grows your money much faster than simple interest, compound interest is a central factor in increasing wealth. Compound interest is calculated by multiplying the initial principal amount by one plus the annual interest rate raised to the number of compound periods, minus one. The total initial principal or amount of the loan is then subtracted from the resulting value. Money printing by central banks after the Great Recession papered over the financial problems and has many large banks paying interest rates as low as 0.01% or 0.001%. Some of the banks then add in further fees for statements or even charge for deposits, which in turn costs more than the interest earnings.

How to Account for Reinvestment
The daily compound interest rate is easy to calculate once you have the APR (annual percentage rate). In fact, it is just the opposite of the calculation example in the prior section. In the prior example, 10.95% was the APR and 0.03% was the daily interest rate. While only $0.53 in interest was gained by compounding daily, this is essentially free money that is earned because of more frequent compounding. However, when you have debt, compound interest can work against you. The amount due increases as the interest grows on top of both the initial amount borrowed and accrued interest. Your annual interest rate compounds faster than any bank account, including savings, money market accounts, and CDs. There are many different places you can save your money with various compounding periods. For example, you could save it in a savings account, a Roth IRA, or a traditional IRA.
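The Rule of 72 and the APR-to-daily-rate conversion described above can be sketched as follows (a rough illustration; the 8% and 10.95% figures are the ones used in the surrounding text):

```python
import math

def rule_of_72(annual_rate_percent):
    """Approximate years to double at a fixed annual compound rate."""
    return 72 / annual_rate_percent

def exact_doubling_time(annual_rate_percent):
    """Exact years to double: solve (1 + r)^t = 2 for t."""
    return math.log(2) / math.log(1 + annual_rate_percent / 100)

print(rule_of_72(8))                     # 9.0 years by the shortcut
print(round(exact_doubling_time(8), 2))  # 9.01 years exactly

# A 10.95% APR spread over 365 days gives the 0.03% daily rate
print(round(10.95 / 365, 2))             # 0.03 percent per day
```

As the article notes, the Rule of 72 is an approximation; for rates in a reasonable range it lands very close to the exact answer.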
Compounding Interest Calculator
Making regular, additional deposits to your account has the potential to grow your balance much faster thanks to the power of compounding. Our daily compounding calculator allows you to include either daily or monthly deposits in your calculation. Note that if you include additional deposits in your calculation, they will be added at the end of each period, not the beginning. Compound interest is the formal name for the snowball effect in finance, where an initial amount grows upon itself and gains more and more momentum over time. Bernoulli also discerned that this sequence eventually approached a limit, e, which describes the relationship between the plateau and the interest rate when compounding. In order to adjust the rate, we must divide it by 2, since we are now earning 2% per period rather than 4%. This may seem a little confusing, but just remember that no matter how many periods over which your principal is compounding, your compounding rate must match the length of the period. With the compound interest formula, you can determine how much interest you will accrue on the initial investment or debt. You only need to know how much your principal balance is, the interest rate, the number of times your interest will be compounded over each time period, and the total number of time periods. Certificates of deposit (CDs), money market accounts, and savings accounts may pay compound interest on a daily or monthly basis.
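The end-of-period rule noted above (deposits are added after interest accrues each period) can be sketched like this; the function is an illustrative helper, not the calculator's actual code:

```python
def balance_with_deposits(principal, rate_per_period, periods, deposit):
    """Grow a balance, adding a fixed deposit at the END of each period."""
    balance = principal
    for _ in range(periods):
        balance *= 1 + rate_per_period  # interest accrues first
        balance += deposit              # then the deposit is added
    return balance

# With a 0% rate the result is just principal plus all deposits:
print(round(balance_with_deposits(1000, 0.00, 12, 100), 2))  # 2200.0
# At 1% per month the same deposits also earn compound interest:
print(round(balance_with_deposits(1000, 0.01, 12, 100), 2))
```

Because each deposit arrives at the end of its period, it earns no interest in that period — a deposit made at the start of each period would grow slightly more.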
In the short term, riskier investments such as stocks or stock mutual funds may actually lose value. But over a long time horizon, history shows that a diversified growth portfolio can return an average of 6% annually. Investment returns are typically shown at an annual rate of return. With savings and investments, interest can be compounded at either the start or the end of the compounding period. If additional deposits or withdrawals are included in your calculation, our calculator gives you the option to include them at either the start or end of each period. Use our interest calculator to calculate the possible growth of your savings and investments over time. You can also use several free compound interest calculators online. The first way to calculate compound interest is to multiply each year's new balance by the interest rate. The second way to calculate compound interest is to use a fixed formula. This means there are a bit more than 52 weeks in the average year: 52 weeks and 1 day in most years, and 52 weeks and 2 days in leap years. You can also use this calculator to solve for compounded rate of return, time period and principal. This formula applies if the investment is compounded annually, meaning we reinvest the money annually. For daily compounding, the interest rate will be divided by 365, and n will be multiplied by 365, assuming 365 days a year. The compound interest calculator lets you see how your money can grow using interest compounding. For instance, we wanted to find the maximum amount of interest that we could earn on a $1,000 savings account in two years. While compound interest grows wealth effectively, it can also work against debtholders. This is why one can also describe compound interest as a double-edged sword. Putting off or prolonging outstanding debt can dramatically increase the total interest owed.
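Both calculation methods mentioned above give the same answer, which a short sketch confirms (illustrative functions, assuming annual compounding):

```python
def compound_iterative(principal, rate, years):
    """Method 1: multiply each year's new balance by (1 + rate)."""
    balance = principal
    for _ in range(years):
        balance *= 1 + rate
    return balance

def compound_formula(principal, rate, years):
    """Method 2: the fixed formula A = P * (1 + r)^t."""
    return principal * (1 + rate) ** years

# $1,000 at 5% for two years, either way:
print(round(compound_iterative(1000, 0.05, 2), 2))  # 1102.5
print(round(compound_formula(1000, 0.05, 2), 2))    # 1102.5
```

The iterative method is easier to follow by hand; the formula is more convenient when the number of periods is large.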
To account for reinvestment, you can re-apply the formula above for each reinvestment period to adjust the principal between each period. We encourage you to seek personalized advice from qualified professionals regarding all personal finance issues. Beginning Account Balance – The money you already have saved that will be applied toward your savings goal. For example, $100 with a fixed rate of return of 8% will take approximately nine (72 / 8) years to grow to $200. Bear in mind that "8" denotes 8%, and users should avoid converting it to decimal form. Also, remember that the Rule of 72 is not an accurate calculation. Use the prior assumptions of an initial value of $1,000 and 200 days, and now set the interest rate to "annual" and 10.95%. This will yield the exact same amount as the daily interest rate of 0.03%. Compounding can help fulfill your long-term savings and investment goals, especially if you have time to let it work its magic over years or decades. Most bank savings accounts use a daily average balance to compound interest daily and then add the amount to the account's balance monthly. Which is better – an investment offering a 5% return compounded daily or a 6% return compounded annually? The following calculator allows you to quickly determine the answer to these sorts of questions.
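That closing question can be answered with the general formula A = P(1 + r/n)^(n·t); here is a quick sketch (illustrative, with a $1,000 principal assumed for concreteness):

```python
def compound(principal, annual_rate, periods_per_year, years):
    """General compound interest: A = P * (1 + r/n)^(n*t)."""
    n = periods_per_year
    return principal * (1 + annual_rate / n) ** (n * years)

daily_5 = compound(1000, 0.05, 365, 1)  # 5% compounded daily
annual_6 = compound(1000, 0.06, 1, 1)   # 6% compounded annually

print(round(daily_5, 2))   # about 1051.27
print(round(annual_6, 2))  # 1060.0
# The higher annual rate wins despite less frequent compounding.
```

More frequent compounding helps, but a 5% rate compounded daily is only equivalent to about a 5.13% annual yield — not enough to beat 6%.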
{"url":"https://joohuat.com.my/2021/01/22/daily-savings-calculator-compound-interest/","timestamp":"2024-11-14T04:35:33Z","content_type":"text/html","content_length":"160247","record_id":"<urn:uuid:5c12df3f-8b35-4037-9a30-ef7683c6e39c>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00381.warc.gz"}
Pemanfaatan Limbah Kulit Kerang Darah (Aanadara Granosa) Sebagai Pengganti Sebagian Semen Campuran Beton
Ningsih, Wahyu (2021) Pemanfaatan Limbah Kulit Kerang Darah (Aanadara Granosa) Sebagai Pengganti Sebagian Semen Campuran Beton. Other thesis, Universitas Islam Riau.
Concrete is a mixture of cement, coarse aggregate, fine aggregate, and water, with or without additives, that forms a solid mass (SNI 03-2834-2000). Cement is used as the binder in concrete; it contains lime, which functions as the binding constituent at about 63%. Seashell waste has a calcium component of 66.7%. Therefore, this study aims to determine the compressive strength and split tensile strength of the concrete and to determine the effectiveness of using blood clam shell waste (Anadara granosa) as a substitute for part of the weight of cement. This study used cylindrical specimens. Five variants of the shell waste were used, namely 0%, 1%, 3%, 5%, and 7% by weight of cement. The method used for the concrete mix design is based on SNI 03-2834-2000. Concrete testing was carried out at the age of 28 days. Based on the analysis and discussion, it can be concluded that the compressive strength of concrete in which cement was partially substituted with blood clam shell waste (Anadara granosa) increased with each increase in the substitution percentage: the 0%, 1%, 3%, 5%, and 7% variants reached 28.309 MPa, 29.818 MPa, 30.101 MPa, 30.290 MPa, and 32.366 MPa, respectively. Likewise, the split tensile strength for the 0%, 1%, 3%, and 5% variants increased to 2.595 MPa, 2.925 MPa, 3.067 MPa, and 3.539 MPa, respectively, and decreased in the 7% variant to 3.397 MPa.
Blood clam shell waste (Anadara granosa) is effective as a substitute for part of the weight of cement because it contains a high proportion of lime, which can increase the compressive strength and split tensile strength of the concrete. The substitution was most effective at 7% of the cement weight, where the compressive strength exceeded that of normal concrete by 4.057 MPa.
Item Type: Thesis (Other)
Contributors: Sponsor – Mildawati, Roza (perpustakaan@uir.ac.id)
Uncontrolled Keywords: Waste of Blood Shells, Compressive Strength, Tensile Strength, Concrete
Subjects: T Technology > TA Engineering (General). Civil engineering (General)
Divisions: > Teknik Sipil
Depositing User: Febby Amelia
Date Deposited: 25 Mar 2022 09:53
Last Modified: 25 Mar 2022 09:53
URI: http://repository.uir.ac.id/id/eprint/9561
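The strength figures quoted in the abstract can be cross-checked with a few lines of Python (the values below are copied from the text; this is only an arithmetic check):

```python
# 28-day compressive strengths (MPa) reported in the abstract
strengths = {"0%": 28.309, "1%": 29.818, "3%": 30.101,
             "5%": 30.290, "7%": 32.366}

normal = strengths["0%"]
for variant, mpa in strengths.items():
    print(f"{variant}: {mpa} MPa (gain {mpa - normal:+.3f} MPa)")

# The largest gain is at 7%: 32.366 - 28.309 = 4.057 MPa
print(round(strengths["7%"] - normal, 3))
```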
{"url":"https://repository.uir.ac.id/9561/","timestamp":"2024-11-02T03:24:25Z","content_type":"application/xhtml+xml","content_length":"24874","record_id":"<urn:uuid:5d63653e-29a9-4227-a4ac-a0d183d2d445>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00380.warc.gz"}
PERT Practice Test | Free PERT Practice Questions

PERT Practice Test
Try our free online PERT Practice Test. These practice questions will help you prepare for the Florida Postsecondary Education Readiness Test. The results of your PERT Test will determine your college course placement. Our practice test will help you review your skills in math, reading, and writing. Choose a topic and start your PERT test prep right now! If you are serious about getting a great score on your PERT math test, try out our recommended PERT Math Prep Course.

Free PERT Practice Tests

More Resources
For more test prep, we recommend the PERT Study Pack from Test Prep Online. It includes hundreds of practice questions and is fully updated for 2024. This is the easiest and most effective way to study for your PERT test!

PERT Test
The Florida Postsecondary Education Readiness Test is more commonly known as the PERT Test. The purpose of this test is to ensure that Florida college students are placed into courses that align with their skills and abilities. The Florida PERT assessment includes 3 diagnostic placement tests covering math, reading, and writing. The PERT Test has 30 questions on each of the topics. There is no time limit, but it normally takes about 3 hours to complete. The test is computer adaptive, which means that the questions automatically become more difficult as you get more answers correct. This keeps the test challenging for all students. If you don't know an answer, then you should work to eliminate any choices that look incorrect and make your best guess. PERT placement scores are valid for 2 years.
{"url":"https://www.pertpracticetest.com/","timestamp":"2024-11-07T09:25:54Z","content_type":"text/html","content_length":"38519","record_id":"<urn:uuid:19b6347b-abd9-4fc1-84ee-60e2a8ac3f17>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00789.warc.gz"}
IIT JAM Mathematics : Syllabus, Pattern, Books, Tips - VedPrep

We all agree that Mathematics is treated as a unique language that people from everywhere use to talk about the universe. Do you know what Charles Darwin said about this incredible subject? According to him, learning maths is like gaining a new kind of understanding. That's why people who link their careers with mathematics have a vast world of opportunities open to them. A master's degree in mathematics can advance your career and your earning potential, mainly because it opens up many good job opportunities. Clearing the IIT JAM Mathematics 2024 exam and entering a prestigious institute for your master's gives you an extra advantage. With an advanced math degree, you can find important jobs and specialize in different areas. You might work for big companies, top accounting firms, health groups, or even people who build things. And if you want to be a math teacher, some places might want you to have a master's degree in math before you can teach or have a permanent job.

Choosing IIT JAM Mathematics for Your Bright Future – Scope of Maths
Having a bachelor's degree is usually okay for starting many jobs. But if you enjoy using math for research, having a master's degree in math can give you an advantage. Choosing IIT JAM Mathematics lets you join good colleges in India for your master's. If you're wondering whether getting a master's degree in mathematics from a reputed college is a good idea, here are some excellent jobs you could consider.
Actuary: Actuaries help big companies and insurance firms make smart decisions about money and risk. They use math to determine how much things might cost and how to avoid problems.
Cryptographer: This job is all about keeping important information safe using technology. It's like a secret code job; banks and credit card companies must protect people's data.
Statistician: Statisticians study numbers and data to understand things better.
They work in many areas like businesses, government, and even medicine.
Banking and Finance: You can work in banks and finance without a math degree, but having one can make you stand out as you move up.
Quantitative Analyst: These clever folks use computers and math to decide if investments might be risky. They work in spots like investment banks and insurance companies. For this job, you usually need a master's degree in math, and some places even really like it if you have a Ph.D.

IIT JAM Eligibility For Mathematics 2024
Suppose you have finally decided to opt for IIT JAM Mathematics as your path to a master's degree and career. In that case, you must check the eligibility criteria before applying for the IIT JAM Mathematics exam. The eligibility criteria give you an idea of what qualifications you should possess and what other rules you should follow before becoming a part of this exam. Below, we are giving you some critical insights about what IIT JAM Eligibility for Mathematics is all about. Please have a clear look at the same.

An Overview of IIT JAM Eligibility for Mathematics
• Nationality: Indian or Foreign Nationals
• Educational Qualification: Bachelor's Degree in Respective Subjects
• Age: No Requirements

IIT JAM Eligibility for Mathematics – Nationality
Regarding who can apply, people from India and other countries can take part in the IIT JAM Mathematics exam.

IIT JAM Eligibility For Mathematics – Educational Qualification
Minimum educational qualifications for admission (Test Paper: Mathematics (MA)). Each entry lists the academic programme, the admitting institute(s), the essential subjects in the bachelor's degree along with the minimum duration, and the essential subjects at the (10+2) level:
• M.Sc. Mathematics – IITGN, IITK: No restrictions; (10+2): Mathematics.
• M.Sc. Mathematics – IITD, IITR: No restrictions for engineering degrees; for a B.Sc./B.S. degree, Mathematics for at least two years/four semesters; (10+2): No restrictions.
• M.Sc. Mathematics – IITB, IITI, IITJ, IITM, IITP, IITPKD: Mathematics for at least two years/four semesters.
• M.Sc. Mathematics / Mathematics and Computing – IITH: No restrictions for engineering degrees; for a B.Sc./B.S. degree, Mathematics for at least two years/four semesters; (10+2): No restrictions.
• M.Sc.-M.Tech. Dual Degree in Mathematics – Data and Computational Sciences – IITJ: Mathematics for at least two years/four semesters; (10+2): No restrictions.
• M.Sc. Mathematics and Computing – IITBhilai, IITG: Mathematics for at least two years/four semesters; (10+2): Mathematics.
• M.Sc. Mathematics – IITISM: Mathematics for at least two years/four semesters.
• M.Sc. Mathematics and Statistics – IITTP: Mathematics for at least two years/four semesters; (10+2): Mathematics.
• Joint M.Sc.-Ph.D. in Mathematics – IITBBS: Mathematics/Statistics as a subject for at least two years/four semesters; (10+2): No restrictions.
• Joint M.Sc.-Ph.D. in Mathematics – IITKgp: Mathematics/Statistics subjects for six semesters/three years; (10+2): Mathematics.
• M.Sc.-Ph.D. Dual Degree in Operations Research – IITB: Mathematics or Statistics as a subject for at least two years/four semesters; (10+2): No restrictions.
• M.Sc. Applied Mathematics – IITMandi: Mathematics for at least two years/four semesters; (10+2): Mathematics.
• M.Sc.-Ph.D. Dual Degree in Energy – IITB: Bachelor's degree in Science (B.Sc. or equivalent) of minimum three years' duration, with any one of Chemistry, Mathematics and Physics for two years/four semesters and any one of the remaining two subjects for at least one year/two semesters; (10+2): No restrictions.
• M.Sc.-Ph.D. Dual Degree in Environmental Science and Engineering – IITB: Any one of Biology, Biotechnology, Chemistry, Mathematics and Physics for two years/four semesters, and any one of the other four subjects for at least one year/two semesters; (10+2): Mathematics.
• Joint M.Sc.-Ph.D. in Atmosphere and Ocean Sciences – IITBBS: Mathematics and Physics with any one of these subjects among Chemistry, Computer Science, Computer Application, Geology, and Statistics; (10+2): No restrictions.

IIT JAM Eligibility For Mathematics – CCMN Counselling
After passing the JAM Exam, candidates can also enter NITs for studies.
However, they must meet the CCMN Eligibility Criteria for the NITs they want to join. To participate in the CCMN Counselling Process, candidates should have at least 60% overall marks (for Gen/EWS/OBC) or 55% (for SC, ST, PwD) in their qualifying degree.

IIT JAM Eligibility For Mathematics – Medical
People who have color blindness or only one functioning eye cannot apply. For fieldwork, candidates with disabilities should be able to walk in the field by themselves. This includes walking on regular roads and rough paths.

IIT JAM Mathematics Syllabus – All About Marking Scheme and Weightage
It's crucial to know what you'll be learning for the exam. The syllabus for IIT JAM Mathematics is based on what you've studied in high school and college. Once you understand the whole IIT JAM Mathematics Syllabus for 2024, it's a good idea to see how many marks each topic is worth. This helps you make a study plan that's just right for the exam. Below are the topics from the syllabus that you should prepare if you want to crack the IIT JAM Exam for Mathematics.
• Sequences and Series of Real Numbers
• Functions of One Real Variable
• Functions of Two or Three Real Variables
• Integral Calculus
• Differential Equations
• Group Theory
• Finite-Dimensional Vector Spaces
• Matrices
If you want full coverage of the mathematics syllabus, you should read the IIT JAM Syllabus 2024, from which you will also be able to download the IIT JAM Mathematics Syllabus PDF.

Marking Weightage for IIT JAM Mathematics (For Each Unit)
If you are still kickstarting your preparation for IIT JAM Mathematics and fear how you can complete the entire syllabus, here is good news for you. To help you know which topic has more weightage in the exam, we have compiled the complete marking weightage for each unit. It will help you know which topics are more important in the exam, and you can prepare accordingly.
This will save you time and let you focus on the important topics. We are sure that with the help of the IIT JAM Mathematics unit-wise weightage, you will be able to manage your preparation stress.
• Real Analysis – 21%
• Calculus of Single Variable – 18%
• Linear Algebra – 14%
• Calculus of Two Variables – 14%
• Vector Calculus – 12%
• Differential Equation – 11%
• Abstract Algebra – 10%

IIT JAM Mathematics Marking Scheme – A Quick Overview
Understanding how the marking scheme works in the IIT JAM Mathematics exam can significantly impact your preparation and performance. It provides insights into how marks are allocated, helping you strategize your approach and focus on areas that contribute the most to your score. There are three sections in the exam. All the sections combined cover 100 marks.
Section A Marking Scheme: This part is split into two. The first part has 10 Multiple Choice Questions, each worth one mark. If you answer these 1-mark questions incorrectly, you lose 1/3 of a mark. The other 20 Multiple Choice Questions are worth two marks each. For these, if you get them wrong, 2/3 of a mark will be taken away.
Section B Marking Scheme: This part includes 10 Multiple Select Questions, each worth two marks. There's no penalty for getting these questions wrong.
Section C Marking Scheme: Here, you'll find 20 Numerical Answer Type questions. You'll get one mark each for ten questions and two marks each for the remaining ten. But don't worry, in Section C, there's no deduction of marks for wrong answers.

IIT JAM Mathematics Books – Download Study Material
If you aim to crack IIT JAM Mathematics in a single attempt, the must-have thing should be the IIT JAM study material you use to complete your preparations. At VedPrep, you can have the best study material, lectures, and doubt solutions. This is why VedPrep is Delhi's best coaching institute for IIT JAM mathematics. We are also suggesting here some recommended books for IIT JAM mathematics.
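The marking scheme described above can be turned into a small score calculator. This sketch assumes the standard JAM pattern in Section C of ten 1-mark and ten 2-mark questions, which makes the three sections total 100 marks (the helper functions are illustrative, not an official tool):

```python
def section_a_score(correct_1m, wrong_1m, correct_2m, wrong_2m):
    """Section A: 10 one-mark and 20 two-mark MCQs, with negative
    marking of 1/3 and 2/3 of a mark, respectively."""
    return (correct_1m - wrong_1m / 3) + (2 * correct_2m - 2 * wrong_2m / 3)

def section_b_score(correct_2m):
    """Section B: 10 two-mark MSQs, no negative marking."""
    return 2 * correct_2m

def section_c_score(correct_1m, correct_2m):
    """Section C: NAT questions (ten 1-mark, ten 2-mark), no negative marking."""
    return correct_1m + 2 * correct_2m

# A perfect paper totals 100 marks:
total = section_a_score(10, 0, 20, 0) + section_b_score(10) + section_c_score(10, 10)
print(total)  # 100.0
```

Because only Section A carries a penalty, guessing is costly there but free in Sections B and C.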
Below, you will find links to our sample study materials that will help you make a decision.

• IIT JAM Mathematics Syllabus PDF Download
• IIT JAM Mathematics Question Papers with Solutions PDF
• IIT JAM Mathematics Books PDF
• IIT JAM Mathematics Video Lectures
• Proper Study Plan For Biotechnology Exam
• Expert Guidance For Your Preparation
• Demo Class For Your Assurance

IIT JAM Preparation Books Mathematics (Recommended)

Referring to the best books is also very important to complete your preparation. Here, we suggest some of the best books for IIT JAM Mathematics that you can read.

IIT JAM Mathematics Books for Calculus
• Calculus – M. J. Strauss, G. L. Bradley and K. J. Smith
• Calculus – Thomas Finney
• Calculus – H. Anton, I. Bivens and S. Davis
• Integral Calculus – Gorakh Prasad
• A Text-Book of Vector Calculus – Shanti Narayan
• Vector Calculus – Murray R. Spiegel (Schaum's), A. R. Vasishtha
• Calculus – Piskunov

IIT JAM Mathematics Books for Real Analysis
• Introduction to Real Analysis – R. G. Bartle and D. R. Sherbert
• A Course in Calculus and Real Analysis – Sudhir R. Ghorpade and Balmohan V. Limaye
• A Course of Mathematical Analysis – Shanti Narayan
• A First Course in Mathematical Analysis – Somasundaram, Choudhari
• Elementary Analysis: The Theory of Calculus – K. A. Ross
• Fundamentals of Real Analysis – V. K. Krishnan
• Mathematical Analysis – Apostol, T. M.
• Mathematical Analysis – Binmore, K. G.
• Methods of Real Analysis – Richard R. Goldberg
• Principles of Mathematical Analysis – Rudin, W.
• Real Analysis – H. L. Royden

IIT JAM Mathematics Books for Differential Equations
• Advanced Engineering Mathematics – Sastri, S. S.
• Advanced Engineering Mathematics – Wylie, C. R. and Barrett, L. C.
• Differential and Integral Equations – Collins, P. J.
• Differential Equations and the Calculus of Variations – Yankosky
• Differential Equations and Their Applications – Ahsan, Z.
• Differential Equations – G. F. Simmons
• Partial Differential Equations: Methods and Applications – McOwen, R. C.

IIT JAM Mathematics Books for Differential Calculus
• Differential Calculus – Balachandra Rao and C. K. Santha
• Differential Calculus – Gorakh Prasad

IIT JAM Mathematics Books for Probability and Statistics
• A Brief Course in Mathematical Statistics – Hogg, R. V. and Tanis, E. A.
• A Text Book of Statistics – C. E. Weatherburn
• An Introduction to Probability and Statistics – Rohatgi, V. K., and Saleh, A. K.
• An Outline of Statistical Theory – Goon, A. M., Gupta, M. K. and Dasgupta, B.
• Fundamentals of Mathematical Statistics – Gupta, S. C. and Kapoor, V. K.
• Introduction to Probability Models – Ross, S. M.
• Introduction to the Theory of Statistics – Mood, A. M., Graybill, F. A. and Boes, D.
• Mathematical Statistics – Ray & Sharma
• Statistics – Murray R. Spiegel

We hope you now have a clear idea about the best books for IIT JAM Mathematics. You can choose any of them from the suggestions and strengthen your preparation.

IIT JAM Cut Off Mathematics – What Are The Minimum Qualifying Marks

IIT JAM Mathematics cut-offs can act as crucial markers for your preparation journey. Cut-offs are the minimum qualifying marks needed to enter the further selection process. If you ask what factors determine the IIT JAM cut-offs, they include the difficulty level of the paper, the number of applicants, and the seats available. Let's now check the previous year's IIT JAM Cut Off Mathematics 2024:

• General – 24.56
• EWS/OBC (NCL) – 22.10
• ST/SC/PwD – 12.38

How to Prepare for IIT JAM Mathematics – Tips and Tricks

Achieving success in IIT JAM Mathematics involves working hard, being committed, and planning smartly.
The advice shared in this article can significantly increase your chances of doing well. So, get a pen and paper, and be ready to jot down valuable notes on how to excel in IIT JAM Mathematics.

Create Your Study Plan

Before starting your IIT JAM Mathematics preparation, the first step is to craft a study plan. Start by breaking down your syllabus, keeping the weightage in mind. Give more time to topics from the IIT JAM Mathematics syllabus with higher weightage, and then allocate time for each topic.

Regularly Practice Mock Tests

The secret of a proven preparation strategy is to take mock tests regularly. A mock test confronts you with your actual level of preparation: it lets you know how well you are prepared and where to improve. By taking mock tests, you will also get used to the exam pattern and the time limits under which you have to complete your paper.

Learn How To Manage Time

The key to success in the IIT JAM Mathematics exam is how well you manage your time. The test comes with a limited time, and you have to learn to solve MCQs quickly. Try giving yourself a specific time while you solve questions, and gradually reduce that time day by day.

Use The Right Resources

Having suitable study materials gives you a clear picture of the exam. Select the right books and read the syllabus thoroughly. Find good practice tests and join a study group. This will significantly help you prepare effectively for the IIT JAM Mathematics exam.

Practice Previous Year Question Papers

Using IIT JAM Mathematics previous year question papers is crucial for JAM Maths preparation. They reveal the exam pattern, guide you to important topics, and help you practice time management. Analyzing these papers enhances your readiness, increasing your chances of passing the exam.

IIT JAM Mathematics FAQs – Your Questions Answered

FAQs help you find the correct answers to your queries.
So, we have included some common but essential frequently asked questions related to the IIT JAM Mathematics exam. We hope this will help you resolve your doubts.

Que 1. Can I give IIT JAM in Mathematics?
Ans 1. You can apply if you have a bachelor's degree in the respective field.

Que 2. Is IIT JAM Mathematics hard?
Ans 2. The difficulty level of Mathematics is moderate to challenging.

Que 3. What is the salary after IIT JAM Maths?
Ans 3. The salary package ranges from INR 20-30 lakh per annum for top IITs, while for other IITs, it is between INR 10-20 lakh per annum.

Que 4. Are six months enough for IIT JAM Maths?
Ans 4. Candidates should start their IIT JAM preparation at least 10-12 months before the exam to complete the syllabus on time and get sufficient revision time.

Que 5. Can an average student crack IIT JAM Mathematics?
Ans 5. By adopting a well-structured preparation approach, maintaining consistent study habits, practicing efficient time management, and fostering a positive mindset, even students with weaker or average academic backgrounds can crack IIT JAM.

Que 6. What can I do after an M.Sc in Maths?
Ans 6. After finishing your M.Sc in Mathematics, you have many job options. You could become a Mathematician, Statistician, Teacher, Software Developer, Sound Engineer, Financial Analyst, Investment Analyst, Meteorologist, Astronomer, Research Scientist, Data Scientist, Data Analyst, Game Designer, Chartered Accountant, and more.

This article aims to give you all the details about IIT JAM Mathematics 2024, which will support your exam preparation. If you have any questions about this article, please ask in the comments below. You can also become part of India's top Learning Community for IIT JAM, where experts from India will help you with any doubts about the exam. To join, download the VedPrep App. Thanks for reading!
Determine the mean and variance of the random variable in Exercise 3-20.
Residual Variance (Unexplained / Error)

Residual variance (also called unexplained variance or error variance) is the variance of any error (residual). The exact definition depends on what type of analysis you're performing. For example, in regression analysis, random fluctuations cause variation around the "true" regression line (Rethemeyer, n.d.). The total variance of a regression line is made up of two parts: explained variance and unexplained variance. The unexplained variance is simply what's left over when you subtract the variance due to regression from the total variance of the dependent variable (Neale & Cardon, 2003). In ANOVA, within-groups variance and residual variance refer to the same thing. In multilevel modeling, residual variance is a reflection of the within-groups effect (Garson, 2019). Large residual variance coefficients indicate large differences within groups (Xie, 2009).

Symbol for Residual Variance

The symbols σ or σ^2 are often used to denote unexplained variance. Make sure you know the author's intent before trying to interpret residual variance: σ may also mean standard deviation, sample standard deviation, or the standard error of coefficient estimates (Rethemeyer, n.d.).

Coefficient of Nondetermination

Residual variance, 1 − r^2, is sometimes called the coefficient of nondetermination. The coefficient of determination, r^2, shows how differences in one variable can be explained by a difference in a second variable. Therefore, 100% of a dependent variable's variance can be explained by r^2 plus the "unknown": the unexplained variance (Meyers et al., 2006).

Garson, D. (2019). Multilevel Modeling: Applications in STATA®, IBM® SPSS®, SAS®, R, & HLM™. 1st Edition.
Meyers, L. et al. (2006). Applied Multivariate Research: Design & Interpretation.
Neale, M. & Cardon, L. (2003). Methodology for Genetic Studies of Twins and Families (Nato Science Series D). Springer.
Rethemeyer, K. Commonly Used Terms, Symbols, and Expressions. Retrieved April 27, 2019 from: https://www.albany.edu/faculty/kretheme/PAD705/SupportMat/CourseTerms.pdf Xie, Y. (2009). Sociological Methodology Volume 39: 1st Edition. Wiley-Blackwell.
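The explained/unexplained decomposition described above is easy to check numerically. The following Python sketch (illustrative, with made-up data; not part of the original article) fits a least-squares line and verifies that total variation splits into an explained and a residual part, so that r^2 and the coefficient of nondetermination 1 − r^2 sum to one.

```python
import numpy as np

# Toy data with some scatter around a line (made-up values)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

# Ordinary least-squares fit: y_hat = a*x + b
a, b = np.polyfit(x, y, 1)
y_hat = a * x + b

ss_total = np.sum((y - y.mean()) ** 2)          # total variation
ss_explained = np.sum((y_hat - y.mean()) ** 2)  # explained by the regression
ss_residual = np.sum((y - y_hat) ** 2)          # unexplained (residual)

r_squared = ss_explained / ss_total
# Coefficient of determination and nondetermination partition the variance
print(round(r_squared, 4), round(1 - r_squared, 4))
```

For an OLS fit with an intercept, ss_total equals ss_explained + ss_residual exactly (up to floating-point error), which is the decomposition the article describes.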
Get short concatenated linear and exponential functions

Excel 4+

The German income tax uses a fitted function with no breaks in between:

0 – 7.664 Euro (tax-free basis): 0
7.665 – 12.739 Euro: (793,10y + 1.600)y
12.740 – 52.151 Euro: (265,78z + 2.405)z + 1.016
52.152 Euro and more: 0,45x – 8.845

"y" and "z" are the parts of the taxable income exceeding the respective lower bracket limit, expressed in ten-thousands of Euro. "x" is the full taxable income. How can this be written as a single manageable formula? The tax-free basis (Euro 7.664) is not needed in the formula.

A short description of the formula follows:

Row 2 and 3: functions' meeting points
Row 4: exponential part
Row 5: linear part
Row 6: f(0)

For a really short formula of concatenated but only linear functions see 5022.htm.
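For reference, here is the same four-zone tariff written out in Python (an illustrative transcription of the brackets quoted above, amounts in Euro; the statutory rounding rules are ignored):

```python
def income_tax(x):
    """German income tax (tariff quoted above) for taxable income x, in Euro."""
    if x <= 7664:
        return 0.0                     # tax-free basis
    if x <= 12739:
        y = (x - 7664) / 10000.0       # exceeding part, in ten-thousands of Euro
        return (793.10 * y + 1600) * y
    if x <= 52151:
        z = (x - 12739) / 10000.0
        return (265.78 * z + 2405) * z + 1016
    return 0.45 * x - 8845             # top zone is purely linear

for income in (7664, 10000, 30000, 100000):
    print(income, round(income_tax(income), 2))
```

Because the zones are fitted to meet, the function is (numerically) continuous at each breakpoint, which is what makes the single-formula spreadsheet version possible.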
Course Report: Historical Quantum Mechanics In the last report from my modern physics course, we wrapped up Relativity, and started into quantum mechanics, talking about black-body radiation and Planck's quantum hypothesis. The next few classes continue the historical theme Class 10: I make a point of noting that Planck himself never liked the idea of quantization of light, and in fact never applied the idea to light directly. His quantum model for black-body radiation was based on the idea of having "oscillators" in the object emitting the radiation. Einstein was the first to apply the idea of quantization to light directly, and take the whole thing seriously. The next step in the story is the photoelectric effect, and I discuss this in the style of the Six Ideas That Shaped Physics text, imagining ideal experiments that can be done with ideal ammeters and voltmeters. If you take two metal plates connected by an ideal ammeter, and illuminate one of them, the liberated electrons will travel to the other plate and back to where they started through the ammeter, giving you a measure of the number of electrons produced. If you replace the ammeter with an ideal voltmeter, the electrons collect on the other plate, and build up until the repulsive force from the extra negative charge is strong enough to stop the next electron from reaching the plate. At that point, the potential difference between the plates is the "stopping potential," which is equal to the maximum kinetic energy of an emitted electron. I talk about the predictions the wave model of light makes for the photoelectric effect, and how they utterly fail to match the experimental observations. Then I talk about Einstein's quantum model, in which a single photon supplies all the energy, and how that fits the data better. I also mention Millikan's experiment and his grudging agreement that the Einstein model works. 
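Einstein's relation makes these two-equations-two-unknowns problems mechanical: the maximum kinetic energy is the photon energy hc/lambda minus the work function, and the stopping potential is that energy expressed in electron-volts. A quick illustrative Python sketch (the sodium work function of 2.28 eV is an assumed textbook value, and the constants are rounded to four figures):

```python
H = 6.626e-34          # Planck constant, J*s
C = 2.998e8            # speed of light, m/s
E_CHARGE = 1.602e-19   # elementary charge, C

def stopping_potential(wavelength_nm, work_function_eV):
    """Stopping potential (volts) for light of a given wavelength on a metal."""
    photon_energy_eV = H * C / (wavelength_nm * 1e-9) / E_CHARGE
    ke_max = photon_energy_eV - work_function_eV
    return max(ke_max, 0.0)  # no photoelectrons if the photon can't pay the work function

# 400 nm light on sodium (assumed work function 2.28 eV)
print(round(stopping_potential(400, 2.28), 2))
```

For 400 nm light the photon energy is about 3.10 eV, so the stopping potential comes out near 0.82 V; at 700 nm the photon energy is below the work function and no electrons are emitted at all, which is the threshold behavior the wave model can't explain.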
That takes a bit more than half a class, including one example problem (because the photoelectric effect lends itself to two-equations-two-unknowns problems, and the more practice they get with those, the better), so the second half of the class is devoted to the other great particle model of light experiment, the Compton effect. I talk about how relativity says that a photon should have momentum that is inversely proportional to the wavelength, and that an X-ray scattering off an electron should lose some momentum, and end up at a longer wavelength. I don't go through the derivation in detail, saving that particular carnival of algebra for a homework problem. This is probably the single lecture from this course that I've given most often. We used to do the photoelectric effect in the first-year E & M course as well, and it's only a little longer than the spiel I gave for the physical constants workshop back in December. I could more or less give this one in my sleep. Class 11: Having dealt with light as a particle, we turn to electrons as waves, starting with the Bohr model of hydrogen. I talk about the empirical success of the Rydberg formula, and how a classical atom-as-solar-system model can't possibly work. Then I explain how Bohr showed that you can explain the spectrum of hydrogen if you make a couple of really odd ad hoc assumptions about the behavior of electrons in atoms. I spend about half of the class showing how these assumptions let you reproduce the observed spectrum of hydrogen and hydrogen-like ions. I make a point of stressing that the Bohr model is still wrong-- it's a miserable failure for atoms more complicated than hydrogen, and is totally lacking in physical justification for its weird assumptions. At the very end of the class, I squeeze in Louis de Broglie, and the idea that electrons should behave as waves. 
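The empirical success of the Rydberg formula is easy to reproduce: 1/lambda = R(1/n_f^2 − 1/n_i^2) gives the hydrogen lines to a fraction of a nanometre. An illustrative Python sketch (vacuum wavelengths, infinite-nuclear-mass Rydberg constant):

```python
RYDBERG = 1.0973731568e7  # m^-1, hydrogen (infinite nuclear mass)

def line_nm(n_final, n_initial):
    """Wavelength (nm) of the hydrogen line for the n_initial -> n_final transition."""
    inv_wavelength = RYDBERG * (1 / n_final**2 - 1 / n_initial**2)
    return 1e9 / inv_wavelength

# First Balmer line (n = 3 -> n = 2), the red H-alpha line
print(round(line_nm(2, 3), 1))  # about 656.1 nm
```

The same two lines of code give the Lyman series in the ultraviolet (n_final = 1) and, with R scaled by Z^2, the spectra of hydrogen-like ions, which is exactly the reach (and the limit) of the Bohr model.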
I show how assuming that electrons are waves lets you justify the Bohr quantization condition by saying that the stationary states of hydrogen are states for which the electron orbit is a standing wave. Which is really weird, but might be crazy enough to be true. I got blanker looks than usual when I mentioned standing waves. I didn't figure out why until Monday, when I realized that we have once again changed the curriculum for the courses before this one, and standing waves aren't covered in those courses any more. We really need to stop changing the goddamn curriculum, because this is about the third time I've taught this class without knowing the exact background of the students taking it. Class 12: Having introduced the idea of de Broglie waves, we move on to direct proof of the wave nature of the electron, in the form of the Davisson-Germer experiment. Davisson and Germer were studying the scattering of electrons off nickel, and seeing nothing too surprising until they broke their apparatus. In the process of repairing things, they inadvertently melted their nickel target, which settled into a single crystal when it cooled back down. After the repairs, they saw a huge number of electrons coming out at one particular angle, which was a signature of diffraction. I do a two-minute recap of Bragg diffraction of x-rays, then explain how the Davisson-Germer case is similar but not identical to the Bragg situation. I work through how to find the diffracted peaks, and show that when you use the de Broglie formula for the electron wavelength, it fits the Davisson-Germer result very nicely. From this, I segue into a slightly hand-wavy discussion of wave packets, and how something like a gaussian wave packet is about the best you can do for describing something like a photon or an electron that has both wave and particle properties. 
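The Davisson-Germer numbers are worth checking against the de Broglie formula: an electron accelerated through V volts has momentum p = sqrt(2 m_e e V) (non-relativistically), and lambda = h/p. An illustrative Python sketch, with constants rounded to four figures:

```python
import math

H = 6.626e-34          # Planck constant, J*s
M_E = 9.109e-31        # electron mass, kg
E_CHARGE = 1.602e-19   # elementary charge, C

def de_broglie_nm(volts):
    """Non-relativistic de Broglie wavelength (nm) of an electron accelerated through `volts`."""
    p = math.sqrt(2 * M_E * E_CHARGE * volts)  # momentum from KE = eV = p^2 / 2m
    return H / p * 1e9

# Davisson and Germer's 54 V electrons
print(round(de_broglie_nm(54), 3))  # about 0.167 nm
```

At the 54 V used by Davisson and Germer this gives about 0.167 nm, comparable to the atomic spacing in nickel, which is why diffraction shows up at all.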
This is more or less the same as Chapter 2 of the book-in-progress, and here as there, I use the wave packet stuff to get the idea of the uncertainty principle. This also leads nicely into: Class 13: I spent this period having the class look at traveling wave solutions in Mathematica, as a way to get a sense of how to use the program. For homework, I ask them to work through another Mathematica notebook on superposition of waves and Fourier series, which lets them see semi-quantitatively that producing a sharp change in a wave pattern requires many more Fourier components than a broader distribution. This sets up Friday's lecture, in which I try to be a little more quantitative about uncertainty. The Mathematica experiment was marred slightly by two things: 1) two students missed Monday's class, and thus didn't get the basic tutorial on Mathematica, which means I can expect some panicky email at about 11pm Thursday night (homework is due Friday) asking how to do the homework, and 2) Mathematica has gone through a version upgrade since the last time I did this, meaning that some of the steps in the notebook (which I cribbed from somebody else) have been rendered obsolete by newer commands. I'll have to re-write them before the next time I teach this. Today's lecture will also be a technical review. I found out a few years ago that contrary to the expectation of basically everybody in physics, the math department does not discuss complex exponentials in the calculus sequence. Which means that students don't see Euler's theorem in any of the math classes that are required for the physics major, which makes solving the Schrödinger Equation a little rocky when they first hit it in my class. Thus, Class 14 is The Swashbuckling Physicist's Guide to Complex Numbers, in which I demonstrate the important properties of complex numbers through plausibility arguments and hand waving, rather than formal proofs. 
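The Fourier-series homework point (sharper features need a broader band of components) is the same trade-off behind the uncertainty principle. An illustrative NumPy sketch, a numerical demonstration rather than a derivation: for a Gaussian packet the RMS spread of the wavenumber spectrum scales inversely with the packet width, so the product of the two is constant (about 1/sqrt(2) with the conventions below).

```python
import numpy as np

def spectral_width(packet_width, n=4096, length=200.0):
    """RMS wavenumber spread of a Gaussian packet exp(-x^2 / (2 w^2))."""
    x = np.linspace(-length / 2, length / 2, n, endpoint=False)
    psi = np.exp(-x**2 / (2 * packet_width**2))
    weight = np.abs(np.fft.fft(psi)) ** 2
    weight /= weight.sum()                    # normalized spectral weight
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)
    return np.sqrt(np.sum(weight * k**2))     # mean k is zero by symmetry

for w in (0.5, 1.0, 2.0):
    print(w, round(spectral_width(w) * w, 3))  # product stays ~0.707
```

Narrow packets have broad spectra and vice versa; making the spatial width smaller by a factor of four makes the spread of wavenumbers four times larger, which is the semi-quantitative uncertainty argument in numerical form.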
Mathematicians would undoubtedly be appalled, but if they want students to see something more mathematically rigorous, they should teach it themselves.

I'm a mathematician and I'm not too appalled by the lack of formal proofs about the complex numbers. The nice thing about them is that plausibility arguments DO hold up--the complex numbers are very well-behaved. You could get really rigorous and construct the complex numbers from the reals, which are constructed from the rationals, etc., and go on about algebraically closed fields, but it wouldn't be any more enlightening to the students.
The best argument I've seen for complex numbers is their requirement for solving cubic equations with real coefficients and real solutions-- the intermediate steps require them, even if they cancel in the solution. It is interesting to remember that back then (16th century), negative numbers were still sometimes called "fictitious".

Physics majors aren't required to take a course on complex numbers at your school? Perhaps some of your students will get something from starting at chapter 5. I learned more about complex numbers from this site than I ever learned in math.

It is interesting to remember that back then (16th century), negative numbers were still sometimes called "fictitious"

In a way, they still are. "Negative" comes from a Latin word meaning "to deny". In this case, what they were denying was the reality of numbers less than zero. Because until the invention of credit, you couldn't have less than nothing.

I have found that students can relate to the idea of complex numbers as "rotational numbers", that is, they are a convenient mathematical device for doing rotational algebra. That is, if I want to add two angles u and v, I can look at

cos(u+v) = cos(u)cos(v) - sin(u)sin(v)
sin(u+v) = sin(u)cos(v) + cos(u)sin(v).

Then things work out dandy if I define

cos(u+v) + i sin(u+v) = (cos(u) + i sin(u))(cos(v) + i sin(v)), where i^2 = -1.

So I can sell the utility of the complex plane in terms of the utility of keeping track of angles. Which is not too far from the truth.
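The "rotational numbers" point in the last comment can be checked in a couple of lines of Python: multiplying unit complex numbers adds their angles, which is exactly the pair of trig identities quoted above.

```python
import cmath
import math

u, v = 0.7, 1.1                 # two arbitrary angles, in radians

zu = cmath.exp(1j * u)          # cos(u) + i sin(u)
zv = cmath.exp(1j * v)
product = zu * zv               # should represent the angle u + v

# The real and imaginary parts reproduce the angle-addition identities
print(math.isclose(product.real, math.cos(u + v)))  # True
print(math.isclose(product.imag, math.sin(u + v)))  # True
```

Nothing about the choice of u and v matters here; the identity holds for any pair of angles, which is the whole sales pitch for the complex plane as rotation bookkeeping.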
Modelling, Dynamics and Control - Section four: Finite horizon predictive control laws

Chapter eight: Predictive control

Finite horizon predictive control laws: definitions and tuning

This chapter introduces finite horizon predictive control law definitions and how these result in a fixed-form control law, for the constraint-free case. This is followed by a discussion of the so-called tuning parameters and the impact these have on closed-loop behaviour. The initial resources summarise the GPC algorithm, but without any real discussion of what constitutes good and bad choices for the input/output horizons and control weighting. This chapter uses a large number of illustrations to help the viewer understand the role of these parameters better and consequently to make intuitively sensible proposals about what choices are definitely bad. The insights lead to a proposal for a moderately systematic design procedure which is validated on a number of examples.

Looks at logical prediction structures for predictive control and the degrees of freedom within these. Also demonstrates how one can form compact representations of performance indices which enable simple matrix/vector algebra and optimisation.

2. GPC performance index and control law

Shows how one can combine the predictions and a performance index in order to form a GPC control law. Includes an aside on optimisation of multivariable functions. The law is given in matrix/vector format and shown to be linear in the core signals (target, input and output).

Shows how the GPC control law expressed in matrix/vector format can be interpreted as being equivalent to a transfer function implementation. Gives the associated closed-loop block diagram and the details required to form the control law parameters.

3. GPC loop analysis and MATLAB simulations

Demonstrates how the closed-loop control law parameters and associated pole polynomial can be computed for the SISO and MIMO cases.
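To make the matrix/vector law concrete, here is an illustrative Python sketch (not the chapter's MATLAB code) of the unconstrained GPC optimisation, using an assumed first-order step-response model: predictions y = G du + f, cost J = ||r - y||^2 + lambda ||du||^2, minimised by du = (G'G + lambda I)^(-1) G' (r - f).

```python
import numpy as np

ny, nu, lam = 10, 3, 1.0        # output horizon, input horizon, control weight
# Step response of an assumed first-order plant with pole 0.8
g = np.array([1 - 0.8**i for i in range(1, ny + 1)])

# Toeplitz "dynamic" prediction matrix: y = G @ du + f
G = np.array([[g[i - j] if i >= j else 0.0 for j in range(nu)]
              for i in range(ny)])

r = np.ones(ny)                 # setpoint over the horizon
f = np.zeros(ny)                # free response (plant starting at rest)

# Unconstrained minimiser of ||r - y||^2 + lam * ||du||^2
du = np.linalg.solve(G.T @ G + lam * np.eye(nu), G.T @ (r - f))
y_pred = G @ du + f
print(np.round(du, 3), round(y_pred[-1], 3))
```

In a receding-horizon implementation only the first element of du would be applied; the whole optimisation is repeated at the next sample with an updated free response.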
Demonstrates some simple MATLAB code for developing and implementing a GPC control law, SISO and MIMO case. Code is highly transparent but also simple so that users can edit easily, modify horizons, weights, models and overlay responses for different choices. Code is available on the Google Site.

4. Sensitivity of GPC laws

Introduces the sensitivity functions for a simple GPC feedback loop. Demonstrates through examples that although disturbance rejection is good, noise rejection may be very poor and indeed could be considered unsatisfactory.

Introduces the T-filter. What is it and why is it included? How is it included and how does this affect the predictions to be deployed in a GPC control law? How does the T-filter change behaviour?

Derives the GPC law with predictions based on a T-filter and then looks at the changes to the closed-loop transferences and sensitivity with a T-filter and shows how these compare to GPC without a T-filter.

Uses MATLAB to illustrate code required to do GPC with a T-filter and compare behaviour with and without a T-filter. Considers prediction, control law formulation, simulation and sensitivity. Code is all available on the Google Site.

6. State-space models in GPC

Looks at how the GPC control law changes if one wants to use a state-space model. For convenience, proposes a slightly different choice of performance index. Also demonstrates the equivalent state feedback and hence how one could determine closed-loop poles.

7. Dynamic Matrix Control

Uses the context of GPC in the earlier videos to introduce DMC and thus highlight the main conceptual similarities and differences.

8. Using independent model formulations in GPC

Introduces the concept of an 'independent model'. Shows how this differs from more conventional prediction models and the impact this has on the construction of the control law. Backed up with MATLAB code and demonstrations.

9.
Examples of the impact of different horizon choices

Demonstrates through example how some 'popular' choices for the horizons can actually lead to very poor behaviour. Uses this as a motivation for the need to investigate and understand the role of the horizons and weighting more carefully.

10. Long and short output horizons in GPC

Gives a number of illustrations of GPC predictions with short output horizons and demonstrates how the associated predictions are often very poor, which in turn suggests the GPC optimisation is ill-posed and not to be trusted.

Gives a number of illustrations of GPC predictions with long output horizons and demonstrates how the associated predictions can be very good and thus lead to a GPC optimisation which is well-posed and can be trusted. However, also shows this insight does not necessarily apply to systems with poor open-loop dynamics (e.g. unstable) and moreover is dependent on an appropriate choice of input horizon.

11. Long and short input horizons in GPC

Looks at choices of input horizon equal to two and demonstrates that for many cases this is not sufficiently flexible to give good predictions and thus cannot lead to an expectation of good closed-loop behaviour.

Uses overlays of predictions with many different choices of control horizon to demonstrate how this parameter affects the prediction. It is made clear that the input horizon cannot be considered in isolation from the output horizon. In general one can only have confidence in the predictions if both the input and output horizons are sufficiently large.

12. The impact of control weighting in GPC

Demonstrates that the input weighting parameter has a limited range of efficacy which is linked to the horizons. Moreover, it is shown that the required horizons for a well-posed optimisation are strongly linked to the choice of this weighting. Also demonstrates that the parameter must be used with care with unstable open-loop systems.

13.
Controlling open-loop unstable processes with GPC

Discusses how the basic set-up of GPC is not appropriate for designing a compensator for unstable open-loop systems and that any control law that 'works' is likely to do so more by luck than good design and thus should be treated with caution. Gives a brief review of historical adaptations to cope with these scenarios.

14. Systematic choices of horizons in GPC

Draws together the insights of the previous seven videos and proposes a systematic choice for the input and output horizons to ensure that the GPC optimisation is well posed and therefore likely to give a sensible answer. Demonstrates the approach with some numerical examples.
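Pulling the pieces together, here is a hypothetical receding-horizon loop in Python (an illustration in the spirit of the chapter, not its MATLAB code), for an assumed first-order plant y+ = 0.8 y + 0.2 u with well-posed horizons: at each sample the free response is formed, the unconstrained optimum computed, and only the first move applied.

```python
import numpy as np

a, ny, nu, lam = 0.8, 10, 3, 1.0                  # plant pole and GPC tuning
g = np.array([1 - a**i for i in range(1, ny + 1)])  # step response
G = np.array([[g[i - j] if i >= j else 0.0 for j in range(nu)]
              for i in range(ny)])
# Gains for the first control increment only (receding horizon)
K = np.linalg.solve(G.T @ G + lam * np.eye(nu), G.T)[0]

y, u, r = 0.0, 0.0, 1.0                           # output, input, setpoint
for _ in range(30):
    f = u + a ** np.arange(1, ny + 1) * (y - u)   # free response, input held
    u += K @ (np.full(ny, r) - f)                 # apply the first move only
    y = a * y + (1 - a) * u                       # plant update
print(round(y, 4), round(u, 4))
```

Because the law acts on control increments, the loop has integral action: at steady state the free response equals the target and the correction vanishes, so the output settles exactly on the setpoint.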
Vertex-edge Roman domination in graphs: complexity and algorithms
How does the shape of the coil affect wireless power transfer?

Coil shapes play a profound role in the performance of wireless power transfer, especially in the coupling coefficient, output power, and energy transmission. We will examine how square, circular, and pentagonal coils fare against each other. While there are many more shapes, these three form the basis of other shapes and, therefore, require careful consideration.

Square-shaped coils for wireless power transfer

Square coils (Figure 1) can provide a better coupling coefficient over various distances between the transmitter and receiver coil. These coils perform better under misalignment conditions; therefore, they are the preferred structure where precise alignment is challenging.

Figure 1. An illustration of a square-shaped coil used in wireless power transfer (Image: Energies, MDPI)

One major drawback of this shape is the perpendicular bends, which result in higher resistive losses than other shapes. Even when designing PCBs, sharp bends are usually avoided to reduce resistance. The same principle applies to square coils for wireless power transfer. Rectangular coils can be taken as a variant of the square coils, which find applications in specific cases where the product shape and size determine the use case. However, Figure 1 shows these coils have a much lower coupling coefficient than square and circular ones.

Spiral-shaped coils for wireless power transfer

Spiral or circular coils (Figure 2) benefit from a uniform magnetic field, as they avoid sharp curves like those in square or rectangular coils. Due to their geometry, spiral or circular coils occupy minimal area and are often preferred in compact products such as smartwatches and mobile phones. The coils can be easily customized in size and number of turns. Therefore, they are preferred during the initial stages of research and development of wireless charging products.

Figure 2.
An illustration of a spiral-shaped coil used in wireless power transfer (Image: Energies, MDPI)

The same geometry of the spiral coils also poses various challenges. They are more sensitive to misalignment because they have a smaller coverage area than square and rectangular coils. The degree of alignment for spiral coils is critical for the best power transfer efficiency.

Pentagonal-shaped coils for wireless power transfer

Pentagonal coils (Figure 3) blend circular and square coils, providing a unique compromise in space utilization and design adaptability for specific applications. Their magnetic field distribution covers a wider area, as with square coils, while approaching a smoother shape like the circular coil.

Figure 3. An illustration of a pentagonal-shaped coil used in wireless power transfer (Image: AIP Publishing)

However, the design of a pentagonal coil is tricky and requires more care than its counterparts. The spacing between turns has to be uniform over the entire length, and the curves must be aligned. Their design usually involves a trade-off between misalignment tolerance and coupling coefficient.

Case study

Here is a case study comparing the performance of square, spiral, and pentagonal coils as the distance between the coils varies. During the study, the surface area of the coils is kept at 110-120 mm^2. The spiral coil was noted to have 15 turns, while the pentagonal and square coils were kept at 14 turns. Note that both the transmitter and receiver coils are of the same shape. Figure 4 shows how the increased distance between the coils decreases output power, energy efficiency, and coupling coefficient. This is obvious to any electrical engineer. However, the spiral coil has an interesting pattern.

Figure 4.
Effect of variations in the distance on the performance of the square, spiral, and pentagonal-shaped coils (a) output power, (b) energy transmission efficiency, (c) coupling coefficient (Image: Energies, MDPI)

From all three parameters, one can observe that the spiral coil shows a noticeable sag when the distance between the coils reaches 20 mm. After that, the curve tends to be linear. Another interesting observation is the relationship between the square and spiral coils. Their performance starts at the same point, 10 mm, and ends at nearly the same point, 40 mm. The differences bulge at 20 mm and then converge. However, the three graphs clearly show that the pentagonal coil outperforms its counterparts by a large margin, at least during the first half. When the distance reaches the 30 to 40 mm range, the performance differences shrink, especially for output power and energy transmission efficiency.

Engineers should note that the graph curves and the ranges in Figure 4 for all three coil shapes apply only for the assumed number of turns and surface area of the coils. Hence, when making the surface area of the coil larger or smaller, with changes in the number of turns, the graphs are expected to change, especially on the x-axis for the distance range. Therefore, this is a good starting point if you consider expanding the study.

Understanding the performance of square, circular, and pentagonal coils gives us fundamental knowledge that can be expanded to other derivative shapes. In the case study, we have seen that pentagonal coils performed better than their counterparts for various parameters. However, due to their geometry, pentagonal coils require a better design understanding, which can be challenging.
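The coupling coefficient tracked throughout the case study has a standard definition in terms of the mutual inductance M between the coils and their self-inductances L1 and L2: k = M / sqrt(L1 * L2). A small Python sketch of the trend (the inductance values below are illustrative assumptions, not figures from the study):

```python
import math

def coupling_coefficient(M: float, L1: float, L2: float) -> float:
    """k = M / sqrt(L1 * L2); M is mutual inductance, L1/L2 self-inductances."""
    return M / math.sqrt(L1 * L2)

# Moving the coils apart lowers the mutual inductance M while the
# self-inductances stay fixed, so k falls -- the trend seen in Figure 4(c).
L1 = L2 = 24e-6  # assumed 24 uH transmitter and receiver coils
for M in (12e-6, 6e-6, 3e-6):
    print(round(coupling_coefficient(M, L1, L2), 3))
```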
- Design and Analysis of Magnetic Coils for Optimizing the Coupling Coefficient in an Electric Vehicle Wireless Power Transfer System, Energies, MDPI
- Study of the Circular Flat Spiral Coil Structure Effect on Wireless Power Transfer System Performance, Energies, MDPI
- A polygonal double-layer coil design for high-efficiency wireless power transfer, AIP Publishing
- Wireless Power Transfer—A Review, Energies, MDPI
- Analysis on Shape and Geometry Effects of Primary Secondary Coils for Dynamic Wireless Power Transfer System, International Journal of Intelligent Systems and Applications in Engineering
How can number lines be used by teachers?

Although teachers today have many options for modeling mathematics, the number line is still an important and useful math tool. While cubes and other manipulatives support learning by representing numbers, the number line actually develops learning by giving students another look at number relationships.

How do you introduce a number line to a student?

Basic Number Line Skills
1. Find the dot marked on a number (and name the dot)
2. Place a dot at the desired location.
3. Find the desired number below the number line's tick mark
4. Add the missing number.
5. Learn that the numbers to the right are higher than numbers to the left.
6. Determine which number is greater than, less than, or equal.

How is the number line displayed?

A number line can be extended infinitely in any direction and is usually represented horizontally. The numbers on the number line increase as one moves from left to right and decrease on moving from right to left. Here, fractions and decimals have been represented on the number line.

What is the purpose of a number line?

A number line is defined as the pictorial representation of numbers such as fractions, integers and whole numbers laid out evenly on a straight horizontal line. A number line can be used as a tool for comparing and ordering numbers and also for performing operations such as addition and subtraction.

What is an interactive number line?

It has been designed for use on an interactive whiteboard. Number Line helps students visualise number sequences and illustrate strategies for counting, comparing or the four operations. Number lines can be easily adapted with whole numbers, fractions, decimals, or negative numbers.

How do you make a number line in Google Docs?

After you have signed into your Google account, you can finally start using the extension! Click on the Line Numbering icon once again.
To enable line numbering, select the ‘Show line numbering’ option then click on ‘Apply’ to reflect these changes. You should now see the line numbers appear in your document.
Using conditional types to port javascript code to typescript

Project Page

Today I finished porting all of the source files in my Tetris Attack remake to Typescript. Most of the work was similar to my last post on the topic, but I did run into an interesting type system issue which I figured would be interesting to talk about.

Javascript Version

The function in question was the gridToScreen function used for transforming block location information from grid space to screen space. The original function looked like this:

```javascript
export function gridToScreen(location) {
  let result = {};
  if (location.position) {
    let blocksTopLeft = new Vector(
      gridCenter.x - gridDimensions.width / 2,
      gridCenter.y - gridDimensions.height / 2 + blockPixelAdvancement);
    result.position = blocksTopLeft.add(
      location.position.multiplyParts(new Vector(1, -1)));
  }
  if (location.dimensions) {
    result.dimensions = location.dimensions.multiply(blockWidth);
  }
  return result;
}
```

It's a pretty simple function which creates a return object with transformed variables depending upon which properties exist on the incoming object. This pattern is somewhat common in dynamic programming languages because you can group a series of operations that are done sometimes together or sometimes separately into one unit. Unfortunately, with traditional type systems this can be difficult to handle properly.
Naive Approach

Standard type annotations for the argument might look like this:

```typescript
interface Location {
  position?: Vector,
  dimensions?: Vector
}

export function gridToScreen(location: Location) {
  let result = {} as Location;
  if (location.position) {
    let blocksTopLeft = new Vector(
      gridCenter.x - gridDimensions.width / 2,
      gridCenter.y - gridDimensions.height / 2 + blockPixelAdvancement);
    result.position = blocksTopLeft.add(
      location.position.multiplyParts(new Vector(1, -1)));
  }
  if (location.dimensions) {
    result.dimensions = location.dimensions.multiply(blockWidth);
  }
  return result;
}
```

This compiles fine, but we run into problems if we want to use properties in the output of the function.

```typescript
for (let coveredSlot of this.coveredSlots.values()) {
  let renderInfo = gridToScreen({
    position: coveredSlot,
    dimensions: Vector.one
  });
  // renderInfo is then handed to the image call along with
  // imageUrl: garbageImages.Clear, center: Vector.topLeft, ...
}
```

For example in the render function on the ClearAnimation class we get a compiler error complaining that the image function argument does not contain the position and dimensions properties. The compiler has no way to guarantee that the properties on renderInfo are actually there.

Set-ish Types

To fix this issue and help the type system along we need to take advantage of some more advanced type system features in the recent versions of Typescript. But first, some background terminology.

Typescript contains two concepts that have names related to Set operations, but are a bit misleading: Union and Intersection types. The intersection of two types in Typescript produces a new type containing all of the properties of each of the types combined. The union of two types, on the other hand, produces a type whose values could have either the properties of the first type or the properties of the second. The union type makes good sense when you think of types as sets of values, since a value of the union comes from one of either of the sets.

The intersection type is weird though, because in terms of property bags you would expect an element inhabiting the intersection of two overlapping types to only guarantee the shared properties. In normal set theory terms:

(A, B, C) Intersection (C, D, E) Equals (C)

But in typescript an intersection means the final object carries all of the properties of both types combined, while it is the union type that gives no guarantee beyond the common properties. It could be me, but I find this somewhat confusing.

Dependent er... Conditional Types to the Rescue

Luckily modern Typescript gives a way to define our own versions of these ideas. In my case I need a type which truly is the "intersection" of two types, which has the common properties between the two. To do this I use type conditions to specify the constraint I have in mind.

```typescript
export type Common<A, B> = {
  [P in keyof A & keyof B]: A[P] | B[P]
}
```

The syntax is a little bit weird, but in English this says the following:

Define the Common of two types, A and B, as
a new type with keys such that every key P exists in both A and B,
and values that are either the type of A[P] or B[P].

In summary, do something closer to the Set intersection of two bags of properties.

The last bit of useful information before I show the final solution is the existence of a Partial type, which is another bit of fancy Typescript type shenanigans that just takes a type and creates a new type where each of the properties are optional. It is defined as such:

```typescript
export type Partial<T> = {
  [P in keyof T]?: T[P]
}
```

In this form you can see the structure of a mapped type or conditional type a little easier. It's just a way to specify properties in terms of the properties on other types.
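As a quick standalone sanity check of what Common produces (the Full and OnlyPos interfaces below are hypothetical, not from the game code):

```typescript
type Common<A, B> = { [P in keyof A & keyof B]: A[P] | B[P] };

interface Full { position: number; dimensions: number }
interface OnlyPos { position: number }

// `position` is the only key present in both, so Common keeps just it.
const common: Common<OnlyPos, Full> = { position: 5 };

// This would be a compile error, since `dimensions` is not a shared key:
// const bad: Common<OnlyPos, Full> = { position: 5, dimensions: 1 };

console.log(common.position);
```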
Better Type Annotations

With our new found fancy types in hand, the more expressive version of the gridToScreen type annotations is pretty simple:

```typescript
interface Location {
  position: Vector,
  dimensions: Vector
}

export function gridToScreen<T extends Partial<Location>>(location: T) {
  let result = {} as Common<T, Location>;
  if ("position" in location) {
    let blocksTopLeft = new Vector(
      gridCenter.x - gridDimensions.width / 2,
      gridCenter.y - gridDimensions.height / 2 + blockPixelAdvancement);
    result.position = blocksTopLeft.add(
      location.position.multiplyParts(new Vector(1, -1)));
  }
  if ("dimensions" in location) {
    result.dimensions = location.dimensions.multiply(blockWidth);
  }
  return result;
}
```

The first step was to specify that the properties on the input argument are optional using the Partial mapped type. Then the type of the result is simply the common properties from the passed in argument and the Location type itself. So if the object passed in only contains the position property, then the result type will only contain position as well, since the only common property is position. The only slightly confusing bit was that I had to modify the if statements to use the in operator to check for the existence of the properties, so the type system can be confident that the position property actually exists on the argument at runtime.

And that's really it! My ClearAnimation render function doesn't need to change at all because the types provide proof that the correct arguments are available when I expect them to be. I'm incredibly pleased that the type system in Typescript continues to get more and more expressive. This is just the smallest baby step toward more complicated proofs in software, but any progress is commendable. Here's to hoping for full fledged Pi types in the future!

Till tomorrow,
Multiple IF statements/Managing large formula.

Hello Smartsheets Community, I was curious if anyone had some input. I've developed a sheet that runs different formulas for different "vendors" in a column to get a final price based on the vendor's data. It's become a very large IF statement as there's many vendors now (if vendor is Vendor 1, *do this* etc). I'd like to make an easier way to add vendors into this formula as we get new ones throughout the year. Is it at all possible to say, have a sheet that I could enter the vendors and data into, and "match" a formula onto another sheet instead of using a long formula? For example, instead of writing a formula for each vendor in one large formula, be able to call on a different formula for each vendor in another way? Hopefully that makes sense, looking to make it easier for other team members to add vendors into the formula. May or may not be possible. Thanks for any thoughts!

• This is mostly possible, with some tweaks. Much of it depends on what your formulas are doing, where they are getting data values from, etc. Say Vendor A has a price multiplier of .71, and Vendor B has a multiplier of .75. A lookup sheet listing this data in Vendor and Multiplier columns could be useful in calculating prices on an order sheet:

Vendor | SKU | Qty | Price | Total
Vendor A | Widget001 | 2 | $8715.00 |

The Total column could use an INDEX/MATCH to bring in the multiplier value and apply it to Qty x Price:

=Qty@row * Price@row * INDEX({Lookup Sheet Multiplier Column Range}, MATCH(Vendor@row, {Lookup Sheet Vendor Column Range}, 0))

The result is a total of $12,375.30.

If you could share some screenshots, specifics of what your formulas need to do, I could help you with the logic and syntax.

Jeff Reisman
Link: Smartsheet Functions Help Pages
Link: Smartsheet Formula Error Messages
If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!

• Thank you for your detailed reply I appreciate it.
Going to work with this formula and see if I can tweak it for what I need. Some vendors require slightly different formulas, so if I can't get it to work I'll forward some images of my project! Thanks again for your time.

• Did you ever get any further with this? I have a similar issue, where we want people to see their carbon emissions from choosing one place to the next. Having a formula with one origin alone to all the destinations is large enough, not sure of the best way to have multiple origins with the same destinations? Did you use a reference sheet? It was too much for the AI assist too, it seems.
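As a side note for anyone sanity-checking the INDEX/MATCH suggestion above outside Smartsheet, the lookup logic boils down to this rough Python equivalent (the vendor data just mirrors the earlier example):

```python
# Lookup sheet: a Vendor column and a Multiplier column, row-aligned.
vendors = ["Vendor A", "Vendor B"]
multipliers = [0.71, 0.75]

def row_total(vendor: str, qty: float, price: float) -> float:
    # MATCH: locate the vendor's row in the lookup column.
    row = vendors.index(vendor)
    # INDEX: pull the multiplier from that row, then apply Qty * Price * multiplier.
    return round(qty * price * multipliers[row], 2)

print(row_total("Vendor A", 2, 8715.00))  # 12375.3, matching the $12,375.30 above
```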
Matrix and Vector Entry Selection

In the LinearAlgebra package, many functions that operate on Matrices and Vectors include an "element, row or column selection" parameter L in their calling sequence. The parameter L can be one of an integer, a range of integers, or a list of integers and/or ranges of integers. By using one of these types for L, you can select one row (column), a range of rows (columns), or a list of rows (columns) from a Matrix, respectively. Similarly, you can select one element, a range of elements, or a list of elements from a Vector. In the following sections, V is a Vector and A is a Matrix.

Valid Values of L

Permissible Integer Values

If L is an integer, it must be non-zero since the indices of Matrices and Vectors always start from 1, and |L| must be between 1 and the corresponding dimension value, inclusive. If L is a negative integer, the selection proceeds as follows: -1 selects the last row (column) or element, -2 selects the second last row (column) or element, -3 selects the third last row (column) or element, etc.

Errors and the Degenerate Case for Ranges

If L includes the range i..j with i = j+1 and 1 <= i <= dim + 1, where dim is the corresponding dimension value, then the result is degenerate in the corresponding dimension (the dimension value in that component is 0). This is not an error. Otherwise, if L includes either an empty set of indices or indices outside the appropriate dimension range, it is an error.

Vector Entry Selection

If a LinearAlgebra function operates on a Vector and includes L in its calling sequence, the entries specified by L are returned in the following manner. If L is a positive integer, the Lth element of the Vector is selected. If L is a negative integer, the element selected is as described in the Valid Values of L section above.
If L is a range i..j of integers that is not degenerate, the elements V[i] through V[j] of the Vector are selected. If L is a list of integers and/or non-degenerate ranges with valid integer endpoints, the elements corresponding to the integer values in L are selected.

Matrix Entries

If a LinearAlgebra function operates on a Matrix and includes L in its calling sequence, the entries specified by L are returned in the following manner. If L is a positive integer, the Lth row (column) of the Matrix is selected. If L is a negative integer, the row (column) selected is as described in the Valid Values of L section above. If L is a range i..j of integers that is not degenerate, the rows (columns) i through j of the Matrix are selected. If L is a list of integers and/or non-degenerate ranges with valid integer endpoints, the rows (columns) corresponding to the integer values in L are selected.

See Also: LinearAlgebra Package Index, LinearAlgebra[SubMatrix], LinearAlgebra[SubVector], lists, Matrix, type[MVIndex], Vector
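As an aside (not part of the Maple help text), the negative-index and range behaviour of L is loosely analogous to Python's list indexing; the analogy is imperfect since Maple indices are 1-based and ranges are inclusive:

```python
V = [10, 20, 30, 40, 50]

# Like Maple's L = -1 or L = -2: count back from the end.
print(V[-1])   # 50, the last element
print(V[-2])   # 40, the second last element

# Maple's range L = 2..4 selects V[2] through V[4] (1-based, inclusive),
# which corresponds to Python's 0-based, half-open slice V[1:4].
print(V[1:4])  # [20, 30, 40]

# The degenerate range i..j with i = j+1 selects nothing, like V[2:2].
print(V[2:2])  # []
```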
2000 Archives - Clay Mathematics Institute

Arthur Jaffe was President of the Clay Mathematics Institute 1998-2002.

The 2000 Clay Research Award was made to Laurent Lafforgue for his work on the Langlands program.

The 2000 Clay Research Award was made to Alain Connes for revolutionizing the field of operator algebras, for inventing modern non-commutative geometry, and for discovering that these ideas appear everywhere, including the foundations of theoretical physics.

Dennis Gaitsgory received his PhD from Tel Aviv University in 1997 under the supervision of Joseph Bernstein. His thesis, Automorphic sheaves and Eisenstein series, develops ideas of A. Beilinson, V. Drinfeld and G. Laumon that deal with the geometrization of the theory of automorphic forms. Dennis was appointed as a Clay Research Fellow for a term of four years beginning 2000.

Manjul Bhargava received his PhD from Princeton University in 2001 under the supervision of Andrew Wiles. His research interests span algebraic number theory, combinatorics, and representation theory. Bhargava's research includes fundamental contributions to the representation theory of quadratic forms, to interpolation problems and p-adic analysis, and to the study of ideal class groups of algebraic number fields. Manjul was appointed as a Clay Research Fellow for a term of five years beginning July 2000. The 2005 Clay Research Award was made to Manjul for his discovery of new composition laws for quadratic forms and for his work on the average size of ideal class groups.

These videos document the Institute's landmark Paris millennium event which took place on May 24-25, 2000, at the Collège de France. On this occasion, CMI unveiled the "Millennium Prize Problems," seven mathematical quandaries that have long resisted solution.
The announcement in Paris honored the 100-year anniversary of David Hilbert’s address of 1900 to the International Congress of Mathematicians in Paris, in which he outlined 23 mathematics problems that set the tone for much 20th century mathematical research.
Print all k-sum paths in a binary tree

Binary Trees are the indispensable data structures of interviews, mostly because they are hard to understand and can change the complexity drastically; usually from O(n) to O(log(n)). Trees are the representatives of the divide and conquer approach among the data structures, and they open the door to mind-bending recursive algorithms.

A binary tree with integer data and a number k are given. Print every path in the tree with sum of the nodes in the path as k. A path can start from any node and end at any node, i.e. they need not be root node and leaf node; and negative numbers can also be there in the tree.

Here is where I first saw the problem: http://www.geeksforgeeks.org/amazon-interview-experience-set-323-software-development-engineer-off-campus/ but couldn't understand it, and found http://www.geeksforgeeks.org/print-k-sum-paths-binary-tree/ where I copied the problem definition from.

taken from http://www.geeksforgeeks.org/print-k-sum-paths-binary-tree/

My approach is basic. First think about a tree with the root only: if its value is equal to the given value then it is a solution. Now let's add a left and a right node. In this case, as we go deeper in the tree we need to look for:

• starting from the root, is the sum equal to the given value
• starting from the left, is the sum equal to the given value, and the same for the right

The hard part is collecting the correct path information. The BST in the solution holds lower values on the left, and the BST may contain negative values.
Here is the code;

```java
// Assumes: import java.util.*; and a Node class with fields
// `int value`, `Node left`, `Node right`, plus a `root` field on the tree.
void findSum(Integer originalSum, Integer sum, List<List<Integer>> result,
             List<Integer> currentList, Node node) {
    if (node == null)
        return;
    Integer nodeValue = node.value;
    currentList.add(nodeValue);
    if (Objects.equals(sum, nodeValue)) {
        // found a path summing to the target; record a copy of it
        List<Integer> resultL = new ArrayList<>(currentList);
        result.add(resultL);
        // as the BST may contain negative values we have to iterate it all
        findSum(originalSum, originalSum, result, new ArrayList<>(), node.right);
        findSum(originalSum, originalSum, result, new ArrayList<>(), node.left);
    }
    int remaining = sum - nodeValue;
    findSum(originalSum, remaining, result, new ArrayList<>(currentList), node.left);
    findSum(originalSum, remaining, result, new ArrayList<>(currentList), node.right);
}

public void findSum(Integer sum) {
    List<List<Integer>> result = new ArrayList<>();
    findSum(sum, sum, result, new ArrayList<>(), root);
}
```

The complexity of my solution is O(n^2); hope it is not a concern for you 🙂 I know my solution is terrible brute-force, but having negative values is the problem here. Besides that, the long signature is also terrifying, but we have to carry the original value to be able to start testing from any node. By the way, the solution on http://www.geeksforgeeks.org/print-k-sum-paths-binary-tree/ is not working, or I couldn't make it work for some reason.
Evolution of Integer Multiplication

We started with an O(N^2) time Integer Multiplication algorithm, and it was only in 1960 that we developed a faster Integer Multiplication algorithm, running in O(N^1.58) time; it was conjectured (in 1971) that the fastest possible algorithm would run in O(N logN) time. Note that all complexity is for multiplication of 2 N-digit numbers.

It took us over 60 years to go from O(N^2) to O(N logN), but it has been an interesting journey. We are living in an important time, as this could be one of the few fundamental topics that we are able to optimize to the limit and understand deeply.

In 1971, the Schonhage Strassen algorithm was developed, which ran in O(N * logN * loglogN) time and held the record for 36 years before being beaten by Furer's algorithm in 2007. Since then, progress has been constant, with an O(N logN) algorithm discovered in March 2019, which is the possible end of this human quest. The following summarizes the algorithms that defined this era:

| Algorithm | Complexity | Year | Notes |
| --- | --- | --- | --- |
| School Multiplication | O(N^2) | 100 BC | - |
| Russian Peasant Method | O(N^2 * logN) | 1000 AD | - |
| Karatsuba algorithm | O(N^1.58) | 1960 | - |
| Toom Cook multiplication | O(N^1.46) | 1963 | - |
| Schonhage Strassen algorithm | O(N * logN * loglogN) | 1971 | FFT |
| Furer's algorithm | O(N * logN * 2^O(log*N)) | 2007 | - |
| DKSS Algorithm | O(N * logN * 2^O(log*N)) | 2008 | Modular arithmetic |
| Harvey, Hoeven, Lecerf | O(N * logN * 2^(3 log*N)) | 2015 | Mersenne primes |
| Covanov and Thomé | O(N * logN * 2^(2 log*N)) | 2015 | Fermat primes |
| Harvey and van der Hoeven | O(N * logN) | March 2019 | Possible end |

Basic approaches: O(N^2)

Integer Multiplication starts with the basic approach that is taught in school, which has a time complexity of O(N^2). Though it took a significant time to improve this complexity, that did not stop us from making improvements over it.
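To make the O(N^2) concrete, here is an illustrative Python sketch (not from the original article) of the schoolbook method: every digit of one number is multiplied with every digit of the other, giving N * N digit products:

```python
def school_multiply(a: str, b: str) -> int:
    # One cell per possible digit position of the product.
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(reversed(a)):
        for j, db in enumerate(reversed(b)):      # N * N digit products
            result[i + j] += int(da) * int(db)
    for k in range(len(result) - 1):              # propagate the carries
        result[k + 1] += result[k] // 10
        result[k] %= 10
    return int("".join(map(str, reversed(result))))

print(school_multiply("1234", "5678"))  # 7006652
```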
One improvement was to reduce multiplication to N additions, since in computing systems addition is much faster. This may not hold on modern systems, where several other optimizations come into the picture.

Another approach was to do multiplication in powers of two. Though it increases the asymptotic complexity, the real performance is good, as multiplication by 2 is done using a left-shift operation, which takes constant O(1) time. Read more about the Russian Peasant Algorithm at OpenGenus.

There are several other variants, but the real progress was made from the 1960s onward, when we discovered several great algorithms over the next half century.

1960s: O(N^2) to O(N^1.58) to O(N^1.46)

Everything started with the Karatsuba algorithm, which was the first algorithm to show that integer multiplication can be done faster than O(N^2), at a time when scientists were stuck at the quadratic bound. The Karatsuba algorithm was discovered in 1960 by Anatoly Karatsuba. It is based on a divide-and-conquer approach and has a time complexity of O(N^1.58).

The next progress was made quickly, in 1963, with Toom-Cook multiplication, which has a time complexity of O(N^1.46). This field was under active research, and news of recent progress spread like wildfire even in those days of no internet. It took around 7 years to make the next huge impact, one that shaped the field for decades.

1970s: The beginning of a new challenge

The Schonhage-Strassen algorithm is one of the greatest advances made in the domain of integer multiplication. It was formulated in 1971 and remained the fastest integer multiplication algorithm for over 36 years. It has a time complexity of O(N * logN * loglogN) and uses the idea of the Fast Fourier Transform. Though this was a major step, it took several decades to improve on it. This gave a sense that the domain was getting too complicated to tackle, but, as we know, we eventually did it.
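The divide-and-conquer idea behind Karatsuba's O(N^1.58) bound can be sketched concretely. Splitting each operand in half would naively need four half-size products (ac, ad, bc, bd); Karatsuba's trick recovers ad + bc from a single extra product (a+b)(c+d), so only three recursive calls are made, giving T(N) = 3T(N/2) + O(N) = O(N^log2(3)) ≈ O(N^1.58). A minimal sketch using binary splitting:

```python
def karatsuba(x: int, y: int) -> int:
    """Karatsuba multiplication: 3 recursive half-size products
    instead of 4, giving O(N^log2(3)) ~ O(N^1.58)."""
    if x < 10 or y < 10:              # base case: a tiny operand
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << n) - 1
    a, b = x >> n, x & mask           # x = a * 2^n + b
    c, d = y >> n, y & mask           # y = c * 2^n + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # (a + b)(c + d) - ac - bd = ad + bc, recovered with one product
    ad_bc = karatsuba(a + b, c + d) - ac - bd
    return (ac << (2 * n)) + (ad_bc << n) + bd
```

The recursion tree has depth log N with 3^depth leaves, which is where the exponent log2(3) comes from.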
2007: Breakthrough

Furer's algorithm was a major breakthrough, as no fundamental progress had been made from 1971 to 2007, and it showed that further progress was possible. It improved the time complexity to O(N * logN * 2^O(log*N)), improving on the loglogN factor of the Schonhage-Strassen algorithm for very large numbers such as 2^(2^64). Despite this, it remained of theoretical interest only, because of several significant challenges to making it usable in practical applications.

This opened up a whole new interest in the domain, and in the following years several optimizations were proposed, but none made it suitable for practical use. Hence, the Schonhage-Strassen algorithm continued to be used in practice.

Further improvements till O(N logN)

Several improvements on Furer's algorithm have been made since 2007. The DKSS algorithm was a notable approach, as it achieved the same time complexity as Furer's algorithm while relying on modular arithmetic and being simpler. It came out in 2008 and has a time complexity of O(N * logN * 2^O(log*N)). It is faster than the Schonhage-Strassen algorithm only for numbers greater than 10^(10^4796).

In 2015, Harvey, van der Hoeven, and Lecerf came up with an algorithm with a better, bounded constant compared to Furer's algorithm. It relies on Mersenne primes and has a time complexity of O(N * logN * 2^(3 log*N)), where the constant is 3, while in Furer's algorithm it is not bounded and can be larger, such as 8. Soon after, in the same year 2015, Covanov and Thomé came up with another algorithm, based on Fermat primes, that improved the constant factor to 2; the time complexity improved to O(N * logN * 2^(2 log*N)).

Despite these improvements, the algorithms were not suitable for practical use, and only minimal improvements were being made. On a positive note, we now have several algorithms built on different basic ideas. The major progress was made in March 2019 by Harvey and van der Hoeven, who proposed an algorithm with a time complexity of O(N logN).
This is significant because in 1971 Volker Strassen conjectured that the best possible complexity for integer multiplication should be O(N logN), so we may have reached the end; the result is still being verified. Though several different approaches will come in the years to follow, we have come a long way and optimized this fundamental operation to its limit.

Read some papers on Integer Multiplication:

- All Research papers on Integer Multiplication (1960 to 2019) by OpenGenus
- DKSS Algorithm (PDF) in 2008
- Harvey, Hoeven, Lecerf Algorithm (PDF) in 2015

With this, you have the complete overall knowledge of this fundamental domain of Integer Multiplication. Enjoy and keep learning.
All Cement Price List | Cement Price Per Bag | Cement Price

How Cement Prices Influence Construction Costs: A Comprehensive Overview of Cement Grades and Rates in India

OPC Cement Price List Today

OPC 53 Grade cement prices today are given in the table:

Cement Brand | Grade of Cement | Price (Rs.)
UltraTech Cement | 53 Grade OPC | Rs. 410
Ambuja Cement | 53 Grade OPC | Rs. 469
ACC Cement | 53 Grade OPC | Rs. 435
Birla Cement | 53 Grade OPC | Rs. 465
JK Lakshmi Cement | 53 Grade OPC | Rs. 340
Dalmia Cement | 53 Grade OPC | Rs. 415
Jaypee Cement | 53 Grade OPC | Rs. 440
Shree Cement | 53 Grade OPC | Rs. 435
Banger Cement | 53 Grade OPC | Rs. 460
Coromandel Cement | 53 Grade OPC | Rs. 415
Priya Cement | 53 Grade OPC | Rs. 495
Ramco Cement | 53 Grade OPC | Rs. 410
Sanghi Cement | 53 Grade OPC | Rs. 400
Hathi Cement | 53 Grade OPC | Rs. 492

OPC 43 Grade cement prices today are given in the table:

Cement Brand | Grade of Cement | Price (Rs.)
UltraTech Cement | 43 Grade OPC | Rs. 407
Ambuja Cement | 43 Grade OPC | Rs. 430
ACC Cement | 43 Grade OPC | Rs. 485
Birla Cement | 43 Grade OPC | Rs. 475
JK Lakshmi Cement | 43 Grade OPC | Rs. 490
Dalmia Cement | 43 Grade OPC | Rs. 450
Jaypee Cement | 43 Grade OPC | Rs. 460
Shree Cement | 43 Grade OPC | Rs. 465
Banger Cement | 43 Grade OPC | Rs. 465
Coromandel Cement | 43 Grade OPC | Rs. 415
Priya Cement | 43 Grade OPC | Rs. 450
Ramco Cement | 43 Grade OPC | Rs. 495
Sanghi Cement | 43 Grade OPC | Rs. 465
Hathi Cement | 43 Grade OPC | Rs. 485

OPC 33 Grade cement prices today are given in the table:

Cement Brand | Grade of Cement | Price (Rs.)
UltraTech Cement | 33 Grade OPC | Rs. 490
Ambuja Cement | 33 Grade OPC | Rs. 440
ACC Cement | 33 Grade OPC | Rs. 470
Birla Cement | 33 Grade OPC | Rs. 475
JK Lakshmi Cement | 33 Grade OPC | Rs. 490
Dalmia Cement | 33 Grade OPC | Rs. 445
Jaypee Cement | 33 Grade OPC | –
Shree Cement | 33 Grade OPC | –
Banger Cement | 33 Grade OPC | Rs. 450
Coromandel Cement | 33 Grade OPC | Rs. 450
Priya Cement | 33 Grade OPC | Rs. 465
Ramco Cement | 33 Grade OPC | Rs. 475
Sanghi Cement | 33 Grade OPC | Rs. 405
Hathi Cement | 33 Grade OPC | Rs. 465

One thing to keep in mind when choosing between OPC and PPC is after how many days you want to remove the formwork: concrete mixes made from PPC take longer to set than OPC, so choose accordingly.

PPC 53 Grade cement prices today are given in the table:

Cement Brand | Grade of Cement | Price (Rs.)
UltraTech Cement | 53 Grade PPC | Rs. 465
Ambuja Cement | 53 Grade PPC | Rs. 468
ACC Cement | 53 Grade PPC | Rs. 400
Birla Cement | 53 Grade PPC | Rs. 440
JK Lakshmi Cement | 53 Grade PPC | Rs. 475
Dalmia Cement | 53 Grade PPC | Rs. 425
Jaypee Cement | 53 Grade PPC | Rs. 470
Shree Cement | 53 Grade PPC | Rs. 410
Banger Cement | 53 Grade PPC | Rs. 450
Coromandel Cement | 53 Grade PPC | Rs. 442
Priya Cement | 53 Grade PPC | Rs. 435
Ramco Cement | 53 Grade PPC | Rs. 470
Sanghi Cement | 53 Grade PPC | Rs. 420
Hathi Cement | 53 Grade PPC | Rs. 455

PPC 43 Grade cement prices today are given in the table:

Cement Brand | Grade of Cement | Price (Rs.)
UltraTech Cement | 43 Grade PPC | Rs. 450
Ambuja Cement | 43 Grade PPC | Rs. 410
ACC Cement | 43 Grade PPC | Rs. 425
Birla Cement | 43 Grade PPC | Rs. 410
JK Lakshmi Cement | 43 Grade PPC | Rs. 496
Dalmia Cement | 43 Grade PPC | Rs. 427
Jaypee Cement | 43 Grade PPC | Rs. 420
Shree Cement | 43 Grade PPC | Rs. 460
Banger Cement | 43 Grade PPC | Rs. 405
Coromandel Cement | 43 Grade PPC | Rs. 450
Priya Cement | 43 Grade PPC | Rs. 465
Ramco Cement | 43 Grade PPC | Rs. 422
Sanghi Cement | 43 Grade PPC | Rs. 470
Hathi Cement | 43 Grade PPC | Rs. 496

All Cement Price List Today 2024 (cement price per bag in India):

Rank | Cement Company | Rate
1 | Dalmia Bharat | Rs 310 / Bag
2 | Mangalam Cement | Rs 315 / Bag
3 | NCL Industries Cement | Rs 320 / Bag
4 | KCP Cement | Rs 325 / Bag
5 | Shree Cement | Rs 335 / Bag
6 | Deccan Cement | Rs 335 / Bag
7 | Burnpur Cement | Rs 335 / Bag
8 | ACC Cement | Rs 343 / Bag
9 | JK Lakshmi Cement | Rs 340 / Bag
10 | Jaiprakash Asso Cement | Rs 340 / Bag

Frequently Asked Questions (FAQ) About Cement Prices and Usage in India

What factors determine the price of cement in India?
Cement prices in India are influenced by various factors such as demand-supply dynamics, production costs, transportation costs, government regulations, and taxes. What are the different grades of cement available in India? The most common grades of cement used in India are Ordinary Portland Cement (OPC) and Portland Pozzolana Cement (PPC). These are further classified into various strengths like 33 Grade, 43 Grade, and 53 Grade OPC, as well as 43 Grade and 53 Grade PPC. How does the choice of cement grade affect construction projects? The choice of cement grade depends on the specific requirements of the construction project. For instance, OPC 53 Grade cement is preferred for high-strength concrete applications such as in bridges and high-rise buildings, while PPC is suitable for general construction purposes. What are the advantages of using OPC cement? OPC cement offers advantages like faster initial setting time, higher resistance to cracking and shrinkage, and lower compliance-related burden. It is commonly used in projects where props need to be removed early and where curing costs need to be minimized. How does PPC cement differ from OPC? PPC cement is generally cheaper than OPC and offers benefits such as better workability, lower heat of hydration, and increased durability due to the formation of insoluble cemented ingredients. It is commonly used in masonry work and where longer setting times are preferred. What are some important considerations when choosing between OPC and PPC? When choosing between OPC and PPC, factors like setting time, curing requirements, and project specifications need to be taken into account. For example, PPC takes longer to set compared to OPC, so the formwork removal time should be considered accordingly. How does the price of cement impact construction costs? 
The price of cement significantly influences construction costs, since it is a major component in various construction activities such as concrete work, masonry, plastering, and flooring. Fluctuations in cement prices can directly affect project budgets and profitability.

What are the most commonly used cement brands in India? Some of the popular cement brands in India include UltraTech Cement, Ambuja Cement, ACC Cement, Birla Cement, JK Lakshmi Cement, Dalmia Cement, and Shree Cement, among others.

How has the consumption of cement in India evolved over the years? The consumption volume of cement in India has shown steady growth over the years, reflecting the increasing demand for construction and infrastructure development in the country.

Where can I find up-to-date information on cement prices and brands in India? Various sources provide up-to-date information on cement prices and brands in India, including industry reports, cement company websites, and online marketplaces specializing in construction materials. Additionally, local dealers and suppliers can offer current price lists and product availability.

Some of the reasons for cement's popularity and universal acceptance are listed below:

1. Cement can be produced on a very large scale and can be handled on a controlled basis, packed, and transported over long distances.
2. Cement is 10 times stronger than clay and lime.
3. Cement is readily available in the market, so it can be easily combined with other locally available materials.
4. Cement can be stored at normal temperature; thus it does not get damaged for a reasonably long time.
5. When cement is mixed with water, it starts gaining strength within 30 minutes and sets hard within 24 to 48 hours; it achieves 66.66% of its strength in 7 days and the remaining strength in 28 days.
6. When water is mixed with lime, it produces a lot of heat, whereas cement mixed with water produces less heat of hydration compared to lime.
7. Cement can also resist compressive stress.
8. In the case of shear stress, it forms a good bond with the steel reinforcement, and the steel provides the strength to overcome the extra stress.

Most Used Cement In India

There are 23 types of cement, used in different places, but two types are most commonly used:

1. Ordinary Portland Cement (OPC)
2. Portland Pozzolana Cement (PPC)

Usually, these two types are used in cement construction.

1. Ordinary Portland Cement (OPC)

OPC cement is a commonly used cement, widely used in construction, and the most popular choice. The advantages of using OPC cement are as below:

1. OPC has been given a number of exemptions and therefore has a lower compliance-related burden.
2. It has great resistance to cracking and shrinkage, but less resistance to chemical attack.
3. The initial setting time of OPC is faster than that of PPC, so it is recommended in projects where props are to be removed early.
4. The curing period of OPC is shorter than that of PPC, which reduces curing costs; hence it is recommended where curing is cost-prohibitive.

Grade 43 and Grade 33 OPC are older grades of cement used for residential construction and are now being replaced by OPC 53 grade cement, which is the best cement for residential construction.

2. Portland Pozzolana Cement (PPC)

This cement is widely used in masonry work. It is cheaper than OPC cement, and its strength is equivalent to OPC 53 grade cement. Some of the advantages of using PPC are:

1. It is cheaper than OPC.
2. It facilitates better workability.
3. Its heat of hydration is lower than that of OPC.
4. It also prevents micro-cracks in the structure, which further increases the strength of the concrete structure.
5. The soluble calcium hydroxide is converted into insoluble cemented ingredients, making the structure impervious/impermeable to water.
6. The production of calcium hydroxide is not as high as in OPC; the low level of calcium hydroxide significantly improves the all-around strength and durability of the concrete structure.
7. The setting time of concrete is prolonged, which helps the mason achieve a good finish on concrete or cement mortar; the cohesiveness of the concrete mix helps for better finishing of concrete.

Cement Consumption In India

The consumption volume of cement in India grew steadily from the financial year 2009 to 2022 (with estimates until 2022).

Introduction of All Cement Price List

This article presents today's cement prices: the all-cement price list for 53 grade and 43 grade, with the price of each cement per bag.

Cement is a very useful material used in construction. Hundreds of years ago, other materials like lime and clay were used in construction, and their strength was very low. Cement has now taken their place and is easily available in the market. So to buy cement, one needs to know the price of cement. Let us now move on to the article.

Cement is most commonly used in construction: from the foundation of a building to its finishing work, cement is used, and the cost of cement plays a very important role in the construction cost of a building. Cement is used in many works, such as concrete in foundations, mortar in brick masonry, mortar in plaster, and bedding in flooring. But different types of cement are used in different places, and the price of each cement varies according to the quality of the cement.
For example, the cement used in the foundation is OPC 53 grade, while the cement used in brick masonry can be PPC 43 grade. Some of the reasons for its popularity and universal acceptance of Cement are listed below: 1. Cement can be produced on a very large scale and can be handled on a controlled basis, packed, and transported over long distances. 2. Cement is 10 times stronger than clay and lime. 3. Cement is readily available in the market, so it can be easily combined with other locally available materials. 4. Cement can be kept stored at normal temperature, the duchess does not get damaged for a reasonably long team. 5. Cement is mixed with water they start gaining strength within 30 minutes, they reach maximum strength in 24 to 48 hours and they achieve 66.66% strength in 7 days and remaining strength in 28 6. When water is mixed with lime, it produces a lot of heat whereas in the case of cement when mixed with water it produces less heat of hydration compared to lime. 7. Cement can also resist compressive stress. 8. In the case of shear stress, it provides strength to the steel and a good bond to the steel to overcome the extra stress. Most Used Cement In India We have used 23 types of cement in which it has been used in different places. But we have used two types of cement communally. Which is as follows. 1. Ordinary Portland Cement (OPC) 2. Portland Pozzolana Cement (PPC) There are two types of cement. Usually, these two types are used in cement construction. 1. Ordinary Portland Cement (OPC) OPC cement is a commonly used cement. This cement is widely used in construction. This cement is the most popular choice. The advantages of using OPC cement are as below. 1. OPC have been given a number of exemptions and their fore have a lower compliance-related burden than a private limited company. 2. It has great resistance to cracking and shrinkage but has less resistance to chemical attacks. 3. 
The initial setting time of OPC is faster than that of PPC, so it is recommended in projects where props are to be removed early. 4. The curing period of OPC is shorter than that of PPC, which reduces curing costs; hence it is recommended where curing is cost-prohibitive.

Grade 43 and Grade 33 OPC are older grades of cement used for residential construction and are now being replaced by OPC 53 grade cement, which is currently the preferred choice for such work.

2. Portland Pozzolana Cement (PPC)

This cement is widely used in masonry work and is cheaper than OPC cement. Its strength is equivalent to that of OPC 53 grade cement. Some of the advantages of using PPC are:

1. It is cheaper than OPC.
2. It facilitates better workability.
3. Its heat of hydration is lower than that of OPC.
4. It also prevents micro-cracks in the structure, which further increases the strength of the concrete structure.
5. The soluble calcium hydroxide is converted into insoluble cemented ingredients, making the structure impervious/impermeable to water.
6. It produces less calcium hydroxide than OPC; this low level of calcium hydroxide provides all-around strength and durability to the concrete structure.
7. The setting time of the concrete is prolonged, which helps the mason achieve a good finish on concrete or cement mortar. The cohesiveness of the concrete mix also helps with finishing.

One thing to keep in mind when choosing between OPC and PPC is how soon you want to remove the formwork. Concrete mixes made from PPC take longer to set than OPC, so choose accordingly.

All Cement Rate List [monthyear]

The OPC 53 Grade Cement Price List Today is given in the table below (Cement Brand | Grade of Cement | Price):

Ultratect Cement | 53 Grade OPC | Rs. 410
Ambuja Cement | 53 Grade OPC | Rs. 469
ACC Cement | 53 Grade OPC | Rs. 435
Birla Cement | 53 Grade OPC | Rs. 465
JK Lakshmi Cement | 53 Grade OPC | Rs. 340
Dalmia Cement | 53 Grade OPC | Rs. 415
Jaypee Cement | 53 Grade OPC | Rs. 440
Shree Cement | 53 Grade OPC | Rs. 435
Banger Cement | 53 Grade OPC | Rs. 460
Coromandel Cement | 53 Grade OPC | Rs. 415
Priya Cement | 53 Grade OPC | Rs. 495
Ramco Cement | 53 Grade OPC | Rs. 410
Sanghi Cement | 53 Grade OPC | Rs. 400
Hathi Cement | 53 Grade OPC | Rs. 492

The OPC 43 Grade Cement Price List Today is given in the table below:

Ultratect Cement | 43 Grade OPC | Rs. 407
Ambuja Cement | 43 Grade OPC | Rs. 430
ACC Cement | 43 Grade OPC | Rs. 485
Birla Cement | 43 Grade OPC | Rs. 475
JK Lakshmi Cement | 43 Grade OPC | Rs. 490
Dalmia Cement | 43 Grade OPC | Rs. 450
Jaypee Cement | 43 Grade OPC | Rs. 460
Shree Cement | 43 Grade OPC | Rs. 465
Banger Cement | 43 Grade OPC | Rs. 465
Coromandel Cement | 43 Grade OPC | Rs. 415
Priya Cement | 43 Grade OPC | Rs. 450
Ramco Cement | 43 Grade OPC | Rs. 495
Sanghi Cement | 43 Grade OPC | Rs. 465
Hathi Cement | 43 Grade OPC | Rs. 485

The OPC 33 Grade Cement Price List [monthyear] is given in the table below:

Ultratect Cement | 33 Grade OPC | Rs. 490
Ambuja Cement | 33 Grade OPC | Rs. 440
ACC Cement | 33 Grade OPC | Rs. 470
Birla Cement | 33 Grade OPC | Rs. 475
JK Lakshmi Cement | 33 Grade OPC | Rs. 490
Dalmia Cement | 33 Grade OPC | Rs. 445
Jaypee Cement | 33 Grade OPC | –
Shree Cement | 33 Grade OPC | –
Banger Cement | 33 Grade OPC | Rs. 450
Coromandel Cement | 33 Grade OPC | Rs. 450
Priya Cement | 33 Grade OPC | Rs. 465
Ramco Cement | 33 Grade OPC | Rs. 475
Sanghi Cement | 33 Grade OPC | Rs. 405
Hathi Cement | 33 Grade OPC | Rs. 465

The PPC 53 Grade Cement Price List [monthyear] is given in the table below:

Ultratect Cement | 53 Grade PPC | Rs. 465
Ambuja Cement | 53 Grade PPC | Rs. 468
ACC Cement | 53 Grade PPC | Rs. 400
Birla Cement | 53 Grade PPC | Rs. 440
JK Lakshmi Cement | 53 Grade PPC | Rs. 475
Dalmia Cement | 53 Grade PPC | Rs. 425
Jaypee Cement | 53 Grade PPC | Rs. 470
Shree Cement | 53 Grade PPC | Rs. 410
Banger Cement | 53 Grade PPC | Rs. 450
Coromandel Cement | 53 Grade PPC | Rs. 442
Priya Cement | 53 Grade PPC | Rs. 435
Ramco Cement | 53 Grade PPC | Rs. 470
Sanghi Cement | 53 Grade PPC | Rs. 420
Hathi Cement | 53 Grade PPC | Rs. 455

The PPC 43 Grade Cement Price List [monthyear] is given in the table below:

Ultratect Cement | 43 Grade PPC | Rs. 450
Ambuja Cement | 43 Grade PPC | Rs. 410
ACC Cement | 43 Grade PPC | Rs. 425
Birla Cement | 43 Grade PPC | Rs. 410
JK Lakshmi Cement | 43 Grade PPC | Rs. 496
Dalmia Cement | 43 Grade PPC | Rs. 427
Jaypee Cement | 43 Grade PPC | Rs. 420
Shree Cement | 43 Grade PPC | Rs. 460
Banger Cement | 43 Grade PPC | Rs. 405
Coromandel Cement | 43 Grade PPC | Rs. 450
Priya Cement | 43 Grade PPC | Rs. 465
Ramco Cement | 43 Grade PPC | Rs. 422
Sanghi Cement | 43 Grade PPC | Rs. 470
Hathi Cement | 43 Grade PPC | Rs. 496

All Cement Price List Today 2024 (Rank | Cement Company | Rate):

1 | Dalmia Bharat | Rs 310 / Bag
2 | Mangalam Cement | Rs 315 / Bag
3 | NCL Industries Cement | Rs 320 / Bag
4 | KCP Cement | Rs 325 / Bag
5 | Shree Cement | Rs 335 / Bag
6 | Deccan Cement | Rs 335 / Bag
7 | Burnpur Cement | Rs 335 / Bag
8 | ACC Cement | Rs 343 / Bag
9 | JK Lakshmi Cement | Rs 340 / Bag
10 | Jaiprakash Asso Cement | Rs 340 / Bag

Frequently Asked Questions (FAQ) About Cement Prices and Usage in India

What factors determine the price of cement in India?

Cement prices in India are influenced by various factors such as demand-supply dynamics, production costs, transportation costs, government regulations, and taxes.

What are the different grades of cement available in India?

The most common grades of cement used in India are Ordinary Portland Cement (OPC) and Portland Pozzolana Cement (PPC). These are further classified into various strengths such as 33 Grade, 43 Grade, and 53 Grade OPC, as well as 43 Grade and 53 Grade PPC.

How does the choice of cement grade affect construction projects?
The choice of cement grade depends on the specific requirements of the construction project. For instance, OPC 53 Grade cement is preferred for high-strength concrete applications such as bridges and high-rise buildings, while PPC is suitable for general construction purposes.

What are the advantages of using OPC cement?

OPC cement offers advantages such as a faster initial setting time, higher resistance to cracking and shrinkage, and a lower compliance-related burden. It is commonly used in projects where props need to be removed early and where curing costs need to be minimized.

How does PPC cement differ from OPC?

PPC cement is generally cheaper than OPC and offers benefits such as better workability, lower heat of hydration, and increased durability due to the formation of insoluble cemented ingredients. It is commonly used in masonry work and where longer setting times are preferred.

What are some important considerations when choosing between OPC and PPC?

When choosing between OPC and PPC, factors such as setting time, curing requirements, and project specifications need to be taken into account. For example, PPC takes longer to set than OPC, so the formwork removal time should be planned accordingly.

How does the price of cement impact construction costs?

The price of cement significantly influences construction costs, since it is a major component in activities such as concrete work, masonry, plastering, and flooring. Fluctuations in cement prices can directly affect project budgets and profitability.

What are the most commonly used cement brands in India?

Some of the popular cement brands in India include Ultratect Cement, Ambuja Cement, ACC Cement, Birla Cement, JK Lakshmi Cement, Dalmia Cement, and Shree Cement, among others.

How has the consumption of cement in India evolved over the years?
The consumption volume of cement in India has shown steady growth over the years, reflecting the increasing demand for construction and infrastructure development in the country.

Where can I find up-to-date information on cement prices and brands in India?

Various sources provide up-to-date information on cement prices and brands in India, including industry reports, cement company websites, and online marketplaces specializing in construction materials. Additionally, local dealers and suppliers can offer current price lists and product availability.
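To show how such a price list can be compared programmatically, here is a minimal Python sketch (a hypothetical helper, not part of the original article) that picks the cheapest brand from a few of the OPC 53-grade prices quoted above:

```python
# A few OPC 53-grade prices (Rs. per bag) taken from the list above.
opc53_prices = {
    "JK Lakshmi Cement": 340,
    "Sanghi Cement": 400,
    "Ultratect Cement": 410,
    "Ramco Cement": 410,
    "Priya Cement": 495,
}

def cheapest(prices):
    """Return the (brand, price) pair with the lowest price per bag."""
    return min(prices.items(), key=lambda kv: kv[1])

brand, price = cheapest(opc53_prices)
print(brand, price)  # JK Lakshmi Cement 340
```

The same dictionary pattern extends to any of the grade tables above; only the numbers change.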
Importing Python packages into Sage or Vice Versa

Hi all, this is my first post. I'm a long-time user of Python, and I'm fairly new to Sage. I'm using Ubuntu, if this helps. Currently, Python doesn't recognize any of Sage's packages, and Sage can't import even standard Python packages like pandas. I've looked at various instructions online about how to import Sage packages into Python and vice versa, but they all seem incredibly intricate, and none of them seem to work for me. Here are a few attempts that I made:

1. Running "sudo sagemath --python -m easy_install pandas" in a terminal. (Still can't import pandas while running Sage, even after running "sys.path.append('/usr/lib/sagemath/local/lib/python2.7/site-packages/pandas-0.19.1-py2.7-linux-x86_64.egg')" from within Sage.)
2. Following the instructions in the "import sage packages in python" question. (I don't have enough karma to post links.)
3. Making various bizarre attempts at changing sage_root or adding it to my Python path.
4. Attempting to import pip while running Sage and install pandas this way. (Can't import pip.)

I tried a few more things (like downloading packages and "build"ing them, and then doing a few more very technical and annoying things), and all of my attempts fail. Is there really no simple way to do this? What is the most fool-proof way that exists? It would be enormously helpful for me to be able to use Python packages while working in Sage, or vice versa.

@Andrew, you should be able to post links now.

2 Answers

Run the Sage shell by typing sage -sh in a terminal. In the Sage shell, run pip install pandas. Next time you launch Sage, you can import pandas.

$ sage
│ SageMath version 7.4, Release Date: 2016-10-18 │
│ Type "notebook()" for the browser-based notebook interface. │
│ Type "help()" for help.
│ sage: import pandas
Traceback (most recent call last)
ImportError: No module named pandas

$ sage -sh
Starting subshell with Sage environment variables set. Don't forget to exit when you are done.
Beware:
* Do not do anything with other copies of Sage on your system.
* Do not use this for installing Sage packages using "sage -i" or for running "make" at Sage's root directory. These should be done outside the Sage shell.
Bypassing shell configuration files...
Note: SAGE_ROOT=/opt/s/sage-7.4
(sage-sh) you@YourComputer:~$ pip install pandas
Collecting pandas
  Downloading pandas-0.19.1.tar.gz (8.4MB)
    100% |████████████████████████████████| 8.4MB 116kB/s
Requirement already satisfied (use --upgrade to upgrade): python-dateutil in /opt/s/sage-7.4/local/lib/python2.7/site-packages (from pandas)
Requirement already satisfied (use --upgrade to upgrade): pytz>=2011k in /opt/s/sage-7.4/local/lib/python2.7/site-packages (from pandas)
Requirement already satisfied (use --upgrade to upgrade): numpy>=1.7.0 in /opt/s/sage-7.4/local/lib/python2.7/site-packages (from pandas)
Requirement already satisfied (use --upgrade to upgrade): six>=1.5 in /opt/s/sage-7.4/local/lib/python2.7/site-packages (from python-dateutil->pandas)
Installing collected packages: pandas
  Running setup.py install for pandas ... done
Successfully installed pandas-0.19.1
You are using pip version 8.1.2, however version 9.0.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
(sage-sh) you@YourComputer:~$

$ sage
│ SageMath version 7.4, Release Date: 2016-10-18 │
│ Type "notebook()" for the browser-based notebook interface. │
│ Type "help()" for help. │
sage: import pandas

When I run pip install python-igraph, it gives:

pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
Collecting python_igraph
  Could not fetch URL https://pypi.python.org/simple/python-igraph/: There was a problem confirming the ssl certificate: Can't connect to HTTPS URL because the SSL module is not available. - skipping
  Could not find a version that satisfies the requirement python_igraph (from versions: )
No matching distribution found for python_igraph

Could you teach me how to install it?

edenharder ( 2016-11-07 13:03:35 +0100 )

To display blocks of code in questions, answers or comments, select the lines of code, then click the "code" button (the icon with '101 010'). Alternatively, indent each line of code by four spaces. Surrounding code with triple-backquotes does not work. Can you edit your comment to do that?

slelievre ( 2016-11-07 13:53:17 +0100 )

The error message is complaining that you have no SSL support. You should install libssl and libssl-dev using your distribution's package manager. Either type this in a terminal

$ apt-get install libssl libssl-dev

or use the graphical user interface for Ubuntu's package manager.

slelievre ( 2016-11-07 14:00:08 +0100 )

This does not work for the same reason edenharder said. I am using Sage on macOS, and Sage is supposed to be fully self-contained. It should contain all the libraries needed. The problem is not that the OS is missing them.

Szabolcs ( 2019-03-01 14:09:23 +0100 )

On macOS, you will need to install the Apple developer tools by running the following in a terminal:

xcode-select --install

and then install OpenSSL and recompile Sage's Python as follows (still in a terminal):

sage -i openssl
sage -f python2 python3

Then you can use pip:
• either do sage -sh then pip install ...
• or directly sage --pip install ...

slelievre ( 2019-03-01 17:21:21 +0100 )

I usually install new Python packages in Sage via

./sage --python -m easy_install <package_name>

(no sudo needed). Have you tried this? (If it is successful, any new Sage notebook will recognise import <package_name>.)
Also, which version of Sage are you using?
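As a closing note (a hedged sketch, not part of the original thread): before reinstalling a package, you can check from any Python or Sage session whether it is importable at all. The helper name below is our own invention:

```python
import importlib.util

def is_importable(pkg_name):
    """Return True if pkg_name can be imported in the current interpreter."""
    # find_spec looks the module up without actually importing it,
    # so this check has no side effects.
    return importlib.util.find_spec(pkg_name) is not None

print(is_importable("sys"))                   # True: always available
print(is_importable("no_such_package_xyz"))   # False
```

Running this inside plain Python versus inside `sage` quickly shows which interpreter's site-packages a given package landed in.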
Compare data in two Google sheets or columns for matches and differences

Whether there's summer knocking on our doors or winter invading Westeros, we still work in Google Sheets and have to compare different pieces of tables with one another. In this article, I'm sharing ways of matching your data and giving away tips on doing that swiftly.

Compare two columns or sheets

One of the tasks you may have is to scan two columns or sheets for matches or differences and identify them somewhere outside the tables.

Compare two columns in Google Sheets for matches and differences

I'll start by comparing two cells in Google Sheets. This way lets you scan entire columns row by row.

Example 1. Google Sheets – compare two cells

For this first example, you will need a helper column in order to enter the formula into the first row of the data to compare. A simple equality check will do, for instance:

=A2=C2

If the cells match, you'll see TRUE, otherwise FALSE. To check all cells in a column, copy the formula down to the other rows.

Tip. To compare columns from different files, you need to use the IMPORTRANGE function, for instance:

=A2=IMPORTRANGE("2nd_spreadsheet_url","Sheet1!A2")

Example 2. Google Sheets – compare two lists for matches and differences

• A neater solution would be to use the IF function. You'll be able to set the exact status for identical and different cells, e.g.:

=IF(A2=C2,"Match","Differ")

Tip. If your data is written in different cases and you'd like to consider such words as different, here's the formula for you:

=IF(EXACT(A2,C2),"Match","Differ")

where EXACT considers the case and looks for complete identicals.

• To identify only rows with duplicate cells, use this formula:

=IF(A2=C2,"Match","")

• To mark only rows with unique records between cells in two columns, take this one:

=IF(A2=C2,"","Differ")

Example 3. Compare two columns in Google Sheets

• There's a way to avoid copying the formula over each row. You can forge an array IF formula in the first cell of your helper column:

=ArrayFormula(IF(A2:A=C2:C,"","Differ"))

This IF pairs each cell of column A with the same row in column C. If records are different, the row will be identified accordingly.
What is nice about this array formula is that it automatically marks each and every row at once.

• In case you'd rather name the rows with identical cells, fill the second argument of the formula instead of the third one:

=ArrayFormula(IF(A2:A=C2:C,"Match",""))

Example 4. Compare two Google Sheets for differences

Oftentimes you need to compare two columns in Google Sheets that belong inside a huge table. Or they can be entirely different sheets, like reports, price lists, working shifts per month, etc. Then, I believe, you can't afford to create a helper column, or it can be quite difficult to manage.

If this sounds familiar, don't worry, you can still mark the differences on another sheet.

Here are two tables with products and their prices. I want to locate all cells with different contents between these tables.

Start with creating a new sheet and enter the next formula into A1:

=IF(Sheet1!A1<>Sheet2!A1,Sheet1!A1&" | "&Sheet2!A1,"")

Note. You must copy the formula over a range equal to the size of the biggest table.

As a result, you will see only those cells that differ in contents. The formula will also pull the records from both tables and separate them with the character you enter into the formula.

Tip. If the sheets to compare are in different files, again, just incorporate the IMPORTRANGE function:

=IF(Sheet1!A1<>IMPORTRANGE("2nd_spreadsheet_url","Sheet1!A1"),Sheet1!A1&" | "&IMPORTRANGE("2nd_spreadsheet_url","Sheet1!A1"),"")

Tools for Google Sheets to compare two columns and sheets

Of course, each of the above examples can be used to compare two columns from one or two tables, or even to match sheets. However, there are a few tools we created for this task that will benefit you a lot.

Compare sheets add-on

This first one will compare two (& more!) Google sheets and columns for duplicates or uniques in 5 steps. Make it mark the found records with a status column (that can be filtered, by the way) or color, copy or move them to another location, or even clear cells and delete entire rows with dupes altogether.
I used the add-on to find the rows from Sheet1 that are absent from Sheet2 (and vice versa) based on the Fruit and MSRP columns.

Then I saved my settings into one scenario. Now I can quickly run them without going through all the steps again whenever the records in my tables change. I just need to start that scenario from the Google Sheets menu.

If you're feeling excited about this tool, go ahead and install it from the Google Workspace Marketplace. You'll notice how much time it saves you :) This help page will gently guide you in case you're stuck on any step.

Compare sheets cell by cell

This one is also part of the Compare Sheets collection. It will compare your Google Sheets for differences. Whether you have two or more tables, it will check them all cell by cell and create one thorough report with differences from all sheets grouped accordingly.

Here's an example of the same two tables. The add-on creates one report with not only different cells (marked with yellow) but also unique rows (marked with red and blue).

Video: How to work with the comparison report

To look at the report and all its parts closely, feel free to read this tutorial or watch this demo video.

Try both add-ons for yourself and notice how much time they save you. :)

Compare data in two Google Sheets and fetch missing records

Comparing two Google Sheets for differences and repeats is half the work, but what about missing data? There are special functions for this as well, for example, VLOOKUP. Let's see what you can do.

Find missing data

Example 1

Imagine you have two lists of products (columns A and C in my case, but they can simply be on different sheets). You need to find those presented in the first list but not in the second one. A formula like this will do the trick:

=ISERROR(VLOOKUP($A2,$C:$C,1,0))

How does the formula work:

• VLOOKUP searches for the product from A2 in the second list. If it's there, the function returns the product name.
Or else you will get an #N/A error, meaning the value wasn't found in column C.

• ISERROR checks what VLOOKUP returns: it shows you TRUE if VLOOKUP came back with the error and FALSE if it found the value. Thus, cells with TRUE are what you're looking for.

Copy the formula to other cells to check each product from the first list.

Note. If your columns are in different sheets, your formula will reference one of them, for instance:

=ISERROR(VLOOKUP($A2,Sheet2!$C:$C,1,0))

Tip. To get by with a one-cell formula, it should be an array one. Such a formula will automatically fill all cells with results, e.g.:

=ArrayFormula(ISERROR(VLOOKUP($A2:$A10,$C:$C,1,0)))

Example 2

Another smart way would be to count all appearances of the product from A2 in column C:

=IF(COUNTIF($C:$C, $A2)=0, "Not found", "")

If there's absolutely nothing to count, the IF function will mark cells with Not found. Other cells will remain empty.

Example 3

Where there's VLOOKUP, there's MATCH. You know that, right? ;) Here's the formula to match products rather than count:

=IF(ISERROR(MATCH($A2,$C:$C,0)),"Not found","")

Tip. Feel free to specify the exact range of the second column if it remains the same:

=IF(ISERROR(MATCH($A2,$C2:$C28,0)),"Not found","")

Pull matching data

Example 1

Your task may be a bit fancier: you may need to pull all missing information for the records common to both tables, for example, update prices. If so, you'll need to wrap MATCH in INDEX, e.g.:

=INDEX($E:$E,MATCH($A2,$D:$D,0))

The formula compares fruits in column A with fruits in column D. For everything found, it pulls the prices from column E to column B.

Example 2

As you may have guessed, another example would use the Google Sheets VLOOKUP function that we described some time ago. Yet, there are a few more instruments for the job. We described them all in our blog as well:

1. These will do for the basics: lookup, match and update records.
2. These will not just update cells but add related columns & non-matching rows.

Merge sheets using the add-on

If you're tired of formulas, you can use our Merge Sheets add-on to quickly match and merge two Google sheets.
Alongside its basic purpose of pulling the missing data, it can also update existing values and even add non-matching rows. You can see all changes in colour or in a status column that can be filtered.

The 2.0 version of Merge Sheets will merge not just 2 tables (one main with one lookup) but multiple sheets in a row (one main with several lookups). The data from the lookup sheets will be added to your main one by one, in the order you added them in the add-on. Lots of additional options will make your merge as comprehensive as you need.

Video: How to use the Merge Sheets add-on for Google Sheets

Check out this video about the Merge Sheets add-on. Though it features just 2 sheets, it paints a clear picture of the add-on's possibilities.

Conditional formatting to compare data in two Google Sheets

There's one more standard way Google offers to compare your data – by colouring matches and/or differences via conditional formatting. This method makes all the records you're looking for stand out instantly. Your job here is to create a rule with a formula and apply it to the correct data range.

Highlight duplicates in two sheets or columns

Let's compare two columns in Google Sheets for matches and colour only those cells in column A that tally with cells in the same row in column C:

1. Select the range with records to color (A2:A10 for me).
2. Go to Format > Conditional formatting in the spreadsheet menu.
3. Enter a simple formula to the rule, such as: =A2=C2
4. Pick the color to highlight cells.

Tip. If your columns change in size constantly and you want the rule to consider all new entries, apply it to the entire column (A2:A, assuming the data to compare starts from A2) and modify the formula like this: =AND(A2=C2,A2<>"")

This will process entire columns and ignore empty cells.

Note. To compare data from two different sheets, you'll have to make other adjustments to the formula. You see, conditional formatting in Google Sheets doesn't support cross-sheet references.
However, you can access other sheets indirectly, for instance: =A2=INDIRECT("Sheet2!A2")

In this case, please specify the range to apply the rule to – A2:A10.

Compare two Google sheets and columns for differences

To highlight records that don't match cells on the same row in another column, the drill is the same as above. You select the range and create a conditional formatting rule. However, the formula here checks for inequality instead: =A2<>C2

Again, modify the formula to make the rule dynamic (have it consider all newly added values in these columns): =AND(A2<>C2,A2<>"")

And use the indirect reference to another sheet if the column to compare with is there: =A2<>INDIRECT("Sheet2!A2")

Note. Don't forget to specify the range to apply the rule to – A2:A10.

Compare two lists and highlight records in both of them

Of course, it's more likely that the same records in your columns will be scattered. The value in A2 in one column will not necessarily be on the second row of another column. In fact, it may appear much later. Clearly, this requires another method of searching for the items.

Example 1. Compare two columns in Google Sheets and highlight differences (uniques)

To highlight unique values in each list, you must create two conditional formatting rules, one for each column.

Color column A: =COUNTIF($C$2:$C$9,$A2)=0
Color column C: =COUNTIF($A$2:$A$10,$C2)=0

Here are the uniques I've got.

Example 2. Find and highlight duplicates in two columns in Google Sheets

You can colour common values after slight modifications of both formulas from the previous example. Just make the formula count everything greater than zero.

Color dupes in column A only: =COUNTIF($C$2:$C$9,$A2)>0
Color dupes in column C only: =COUNTIF($A$2:$A$10,$C2)>0

Tip. Find many more formula examples to highlight duplicates in Google Sheets in this tutorial.

3 quickest ways to match columns and highlight records

Conditional formatting can be tricky sometimes: you may accidentally create a few rules over the same range or apply colors manually over cells with rules.
Also, you have to keep an eye on all the ranges: the ones you highlight via rules and those you use in the rules themselves. All of this may confuse you a lot if you're not prepared and not sure where to look for the problem.

Luckily, our Compare Sheets collection for Google Sheets has 3 user-friendly solutions for you.

Video: What is the Compare Sheets collection

Add-on to compare & highlight duplicates or uniques

Compare Sheets for Duplicates is intuitive enough to help you match different tables within one file or two separate files, and highlight those uniques or dupes that may sneak into your data. Here's how I highlighted duplicates between just two tables based on the Fruit and MSRP columns using the tool.

I can also save these settings into a reusable scenario. If the records update, I will call for this scenario in just a click and the add-on will immediately start processing all the data. Thus, I avoid tweaking all those settings over the add-on steps repeatedly. You will see how scenarios work in the example above and in this tutorial.

Add-on to compare Google sheets and highlight differences

Compare Sheets Cell by Cell doesn't fall behind. It sees all the differences between two columns or sheets. In fact, it compares as many sheets as you need, even from different files. Usually, one of these tables acts as your main one, and you compare it with the others. The add-on highlights differences on those other sheets so you can spot them instantly.

Video: How to use the Compare Sheets Cell by Cell add-on

This help page and the demo video below will give you a better idea of how it compares multiple Google sheets for differences.

Compare two columns and color dupes/uniques

This last tool comes in especially helpful for a simpler task: comparing just two columns within one Google Sheets tab. Why especially helpful? Because both columns are on one sheet, going over 5 steps is too much.
Hence, there's only one step with all the necessary settings. This tutorial discusses every option in detail if you'd like to take a look.

And guess what? This tool is also part of the Compare Sheets collection from the Google Workspace Marketplace. That's right: you get all 3 tools with just one add-on. Give it a go and you won't regret it! (And if you do, let me know why in the comments section!)

Anyways, all these methods are now at your disposal – experiment with them, modify and apply them to your data. If none of the suggestions help your particular task, feel free to discuss your case in the comments down below.

Comments

1. Hi! I have data from 3 different databases, now in one spreadsheet, one database in each of columns A, B, C (C is the largest), and I need to erase from column C all of the cells repeated from columns A and B. Could you help me?

□ Hi Mila, Formulas won't erase the values. For that, consider using our Compare Two Columns tool and compare column C with column A, then with column B.

2. Hi, I have two sheets where I am trying to match first and last names. Sometimes the first name (and rarely, the last name) may be misspelled, but I still want them to match. For example, the master sheet has Andy Jones, but the other sheet says Andrew Jones, which is the same person. I have the following formula that only works when first and last names are perfectly matched. But I want to capture slight variations. I have tried to add the "*A1*" wildcards, but I get errors. Any help is appreciated!

□ Hi! To use wildcards in the COUNTIF and COUNTIFS functions, add "*" using the & operator. For example, "*"&A1&"*"

☆ Thanks for this. How much variation does the wildcard give you? For instance, if I have Richard Jones on the main sheet and Rich Jones on the second sheet, the wildcards do not flag that. Any suggestions?

○ Hi! I recommend paying attention to the Find Fuzzy Matches for Google Sheets tool.
The tool compares words for similarities and differences and creates a list for you of all fuzzy duplicates grouped by entry.

3. I'm looking to match a column of customer numbers with a second sheet (or with added columns in the first sheet) containing customer numbers and names, so I can see who needs to be contacted.

□ Hi! If I understand your task correctly, try to use the VLOOKUP formula. Read more: VLOOKUP in Google Sheets with formula examples.

4. I am trying to identify changes in sheets I receive each month. The important data is the customer ID #, names, and a Yes/No column, plus there are a few columns I am not concerned about. I want to see when someone is no longer listed on the newer sheet (this is the one I am having the most trouble with), when there is a new person added, and when someone has changed from yes to no. I'm unable to come up with a concise way to do all 3. Any advice would be greatly appreciated.

□ Hi! To compare Excel spreadsheets for differences, I'd recommend taking a look at our Compare Sheets tool. It is available as part of our Ultimate Suite for Excel, which you can install in trial mode and check how it works for free.

5. I have 2 spreadsheets with part numbers in column A. Each sheet is for a different company. Column B has the specific pricing for that company. Is there a way to combine these sheets so that I can see a column with the pricing for the first company and a column right next to it for the second company, for each part number?

□ Hello Crystal, There are a few ways; I described them in this part of the article, please take a look.

6. Hi, I have 2 columns filled with numbers. Let's call them Column A and Column B. Column A represents my target and Column B my current position. I would like to apply a colour scheme of red to Column B if the numbers don't match the target, and green if they do.

A | B
1 | 0 (make red)
3 | 3 (make green)
7 | 6 (make red)

Thank you!

□ Hi Justin, This solution will help you.

7. Hi, first of all thanks for your amazing work.
I hope you can help me. How can I reconcile 2 Excel sheets to make sure the data matches up? For example, two months of payroll. How can I have Excel compare both sheets and highlight where they differ (name/amount)? How do I set it up so I know what the issue is? E.g. red for extra names, yellow for contradicting numbers... Thank you!

□ Hi Izzy, Thank you for your feedback! For comparing files in Excel, please visit this article.

☆ Thank you so much!

8. Hi, I have a sheet with columns:
A - list of names in no particular order
B - list of names in no particular order
I want, in a third column:
C - list of names that are in both columns A and B (matching names; since they won't necessarily be on matching rows, it needs to check for matches in the whole columns and add to column C the names that appear anywhere in both columns A and B).
Is there a formula for this that I could simply put in column C? Or how would I go about producing a list in a separate column with only the matching names from A and B? Thanks so much for any help!

□ Hi JN, Try this formula: =UNIQUE(FILTER(A:A, COUNTIF(B:B, A:A)))
Learn more about COUNTIF, FILTER & UNIQUE.

☆ What if I want the values that don't match in column C?

○ Try this one then: =FILTER(A:A, ISNA(MATCH(A:A, B:B, 0)), A:A<>"")

9. Hello, I have browsed through many of your tutorials and streamlined many of my worksheets using the techniques you have published. There are still some issues where I need assistance. I own a business where we deliver goods to 150 consumers a day and have to track their online payments. Each consumer has a unique consumer ID, and the buying process involves the consumer booking and paying for the goods online; the goods are delivered to them in the next 1-2 weeks. We have a team of delivery personnel who deliver the goods to the consumers and bring back the consumer IDs of those who have completed the payment.
We verify the payments by extracting the consumer data from our company's online portal, which contains the transaction details of all the consumers in the form of an elaborate list. We call it the portal list. We do the cross-verifying on a monthly basis to avoid any discrepancy between the payments reported by the delivery team and the actual amount of the goods sold. The entries for the month are 4000+.

1. To verify, we create two lists: one with all the consumer numbers obtained from the delivery boys (let's call this list 'DList'), and a second with the consumer IDs we get from the portal (let's call this 'PList'). I use the following formula to compare the consumer ID provided by the DBoy with the column in the portal list that contains the consumer IDs:

=IF(ISNA(VLOOKUP(D2, PList!$E$2:$E$5000, 1, FALSE))," NOT RECIEVED", "RECIEVED")

I paste this formula in the DList, in which column D holds the consumer numbers provided by the DBoy. Then I get the list of consumers whose payment has been received and whose has not. (The discrepancy here is that the DBoys might provide a fake or wrong number, and there is a possibility of theft.) But this formula fails if the consumer has taken the goods more than once in a month, because the formula only compares and returns the value 'RECIEVED' even if the consumer ID matches the PList just once. So to count the number of matches I use another formula. This shows me whether the consumer ID provided by the DBoy has 1 match or >1, and then we can verify the consumers who have got the goods more than once in a month by finding the consumer ID in the PList manually (using Ctrl + F and pasting the respective consumer ID into the Find popup) and checking that the consumer ID provided by the DBoy is legit. But this process takes a lot of time and effort for us to complete.

So I wanted a formula which compares the consumer IDs in the DList with the column of consumer IDs in the PList and then returns the values which match and the values which don't match. I also need the formula to highlight the rows in the PList which show more than one match with the consumer IDs in the DList and, if possible, create a separate table with those highlighted rows in the same worksheet.

□ Hello, I'm glad to hear our articles help you with your spreadsheets! Now, thanks to your thorough and detailed description, I understand the whole workflow and the formulas you're using. What I'm having difficulties getting is the exact result you'd like to see. I'm not sure, but it looks like you're trying to compare the IDs row by row on these sheets, as described in this part of the article? If not, please create a sample sheet with about 20 rows of user ID samples in each list (they don't have to be the real ones) and with an example of the result you'd like to get from these lists. And share this sample with us: support@apps4gs.com. I'll look into it and try to help.

Note. We keep that Google account for file sharing only and don't monitor its Inbox. Please do not email there. Once you share the file, just confirm by replying to this comment.

As for highlighting, I believe COUNTIF formulas like the ones in Example 2 will help you out. You just need to use the '>1' condition instead of '>0'.
So I wanted a formula which compares the consumer id in the DList with the column of consumer ids in the PList and then returns the values which match and the values which don't. I also need the formula to highlight the row in PList which shows more than one match with the consumer IDs in the DList and, if possible, create a separate table with those highlighted rows in the same worksheet. □ Hello, I'm glad to hear our articles help you with your spreadsheets! Now, thanks to your thorough and detailed description I understand the whole workflow and the formulas you're using. What I'm having difficulty getting is the exact result you'd like to see. I'm not sure, but it looks like you're trying to compare the IDs row by row on these sheets, as described in this part of the article? If not, please create a sample sheet with about 20 rows of user ID samples in each list (they don't have to be the real ones) & with the example of the result you'd like to get from these lists. And share this sample with us: support@apps4gs.com. I'll look into it and try to help. Note. We keep that Google account for file sharing only and don't monitor its Inbox. Please do not email there. Once you share the file, just confirm by replying to this comment. As for highlighting, I believe the COUNTIF formulas like here in Example 2 will help you out. You just need to use the '>1' condition instead of '>0'. 10. Date / Open: Feb-02-2007 273.85, Feb-05-2007 324.65, Feb-06-2007 307.1, Feb-07-2007 347.86, Feb-08-2007 381.42, Feb-09-2007 355.36, Feb-12-2007 346.59, Feb-13-2007 302.72, Feb-14-2007 302.72, Feb-15-2007 319.08, Feb-19-2007 368.35, Feb-20-2007 356.85, Feb-21-2007 347.9. I want to fill missing dates with a zero value, as in the example. How do I do it? Please guide me.
Date / Open: Feb-02-2007 273.85, Feb-03-2007 0, Feb-04-2007 0, Feb-05-2007 324.65, Feb-06-2007 307.1. Thanks, Rishi □ Hello Rishi, I'm afraid there's no formula that would add rows inside your table and fill them with the required dates and values. You may try to find a solution here – an overview of Google Apps Script with a lot of helpful content and links: 11. Is it possible to highlight duplicates over several tabs in Excel and Google Sheets? I have a tab/sheet per day and I need a range of 3 columns to indicate if there are any duplicate entries. □ Hello Tina, If you work in Google Sheets, the first two ways that describe highlighting duplicates are fit for comparing more than 2 sheets at a time. The third way, with conditional formatting, will require several rules where you compare 2 sheets at a time. We are also working on the add-on that will compare multiple tables at once. 12. Hello, Thank you for your blog! I'm looking for the formula for INDEX MATCH MATCH in Google Sheets. The 'Pull matching data' but with two criteria. □ Hello Louis, If I understand you correctly, you're looking for this solution. 13. Hi! I hope you are able to help me. I need to compare 2 columns if both have values or only 1 column has; how do I do it? Thanks in advance! You are really helping a lot of us! :) □ Hi April, Could you please describe the task in more detail? How do you want to see the results? Also, did you try any of the ways described in this article? 14. Hi, I have three sheets of attendees for an event over the years (2020, 2021, and 2022). I want to find anyone who has attended ALL THREE years, so there would be a common value on each sheet. Email is the easiest value to compare. I can't find an applicable formula. I'd like to keep the data in Google Sheets since another person needs access to it. Any suggestions? □ Hi Tery, Are the names in different tabs within the same file or in different files?
Would you like to color the matching names in each sheet or have them pulled into another table? ☆ Hi Natalia, I hope that you can help me with the same about emails. I just need to highlight the people attending an event, comparing a registration list to the list of participants. ○ Hi andrea, Have you tried the methods described in this part of the article? 15. Hey there, I have an accounting workbook with a sheet for Expenses, with a column for Descriptions with details of the charges (i.e. "Payment to Amazon", etc...). I also have a Provider sheet that has my providers' names, details, etc... and includes a "Query" column with possible values to match against the Descriptions column in the Expenses sheet. I want to automatically compare the values from the Expenses/Descriptions column with the Providers/Query column and, if there is a match, return the index of the row from Providers so that I can populate other fields in the Expense line with a VLOOKUP. The Query values might not be exact. They just have to match part of the string. I've searched around a bit and I can find pieces of the puzzle, but I'm not sure of the best way to put it all together. Is this possible? Any advice is greatly appreciated! □ Hey Craig, To look for partial matches, you need to use wildcard characters. Svetlana described them in this blog post, please have a look. 16. How do you compare 2 columns on 2 different tabs in the same Google Sheet for differences? □ Hello George, Feel free to use any of the ways described above: using formulas and a status column, using the add-on, or using colors. 17. Hi, Ablebits!!! Your work is awesome. I was looking for specific work to be carried out using the QUERY formula, and my search was fulfilled after reading this page. Thanks a lot. Now I have also learnt how to further reduce the pain while handling multiple worksheets. Thank you team. All the best, may God bless you □ Hi Jaisankar, Appreciate your feedback! Glad the article has helped! :) 18.
Is it possible to distribute a column based on the text in another column? Column A has: 50 blue, 70 orange, 100 red. Column B is longer: 50 tom, 60 mike, 70 betty, 90 jace, 100 natalia, 110 glen. Desired column C: 50 blue, 50 tom, 60 mike, 70 orange, 70 betty, 90 jace, 100 red, 100 natalia, 110 glen. I'm trying to distribute the info in column A to line up with column B based on the unique number they both have... It's hard to explain. □ Hello Eric, I'd solve this task the following way: 19. Hi, I'm curious if you could help me with the issue I'm running across. We provide services to young adults and have a list of the services we've provided to each young adult (some young adults have had 10+ services this program year, so they have 10+ rows in our sheet). We're being asked to provide the number of unique young adults served per quarter who did not receive services in the previous quarters of that same program year. Our list includes names and dates of services. I can't figure out how to get the counts we need from our complete list, so I split the names into 3 columns, each column containing the list of participants served each quarter (e.g. column 1 has Q1 participants, column 2 has Q2 participants, etc.) I then used the conditional formatting formula to highlight the participants served in Q2 who were not served in Q1 so that I could count them manually. The issue I'm running into is then figuring out the number of participants served in Q3 who were not served in Q1 and Q2. Our staff has always done this manually and I'm hoping we can use formulas to get these counts instead. Here is a link to a sample spreadsheet: https://docs.google.com/spreadsheets/d/1qTLtGoPdfymhGZkHzuKy97VLwXXPpZTBJapsFV1IXcY/edit?usp=sharing □ Hi Natalie, Thank you for sharing the file right away. I duplicated the Students by quarter sheet and put the results there, please take a look. 1. Based on your manual student sorting, I entered the dates for quarters (A1:C4). 2.
I filtered students based on those dates in the corresponding columns (A8:A12, B8:B16, C8:C16). Each column is filled with unique students only. 3. Then I used conditional formatting to highlight duplicate students between the columns. For column 3, I used two rules since there are 2 columns to compare with. 4. Then I used our Function by Color to count only non-highlighted cells in each column (unique students: G10, G11). Please install the add-on at least in a trial mode to see the result and decide if you need the add-on for future use. ☆ Natalia, thank you so much for this. Those formulas to pull over the de-duplicated data are incredibly helpful. With that and the conditional formatting rules, we'll be able to get those final numbers much quicker. Thank you so much for your help! ○ I'm happy to help, Natalie! :) 20. Hi! I am looking for a formula to find the sum or difference on separate Google sheets (each sheet is a monthly report). Each sheet has names and a number value. I'm trying to identify if the number values increased or decreased from month 1 to month 2 for each name. The names are not always the same and don't always match up by cell each month, complicating the issue for me. A2:A192 are the names C2:C192 are the number values □ Hi Kel, For me to be able to help you better, please share a small sample spreadsheet with us (support@apps4gs.com) with 2 sheets: (1) a copy of your source data (2) the result you expect to get (the result sheet is of great importance and often gives us a better understanding than any text description). I kindly ask you to shorten the tables to 10-20 rows. Note. We keep that Google account for file sharing only and don't monitor its Inbox. Please do not email there. Once you share the file, just confirm by replying to this comment. I'll look into your task. ☆ Thank you! I've shared a "Sample Sheet" with the above contact; it consists of the 2 Google sheets and the expected result sheet.
I also have 2 sheets with pivot tables based on the 2 sample sheets, as I'm not sure if it would be easier to solve from the pivot or source data sheets. Hope this is all the information you need; if not, please let me know, thanks so much! ○ I've got the file, Kel, thank you. Based on your result example, I've created a couple of formulas that do the same. You will find them in the Expected Result sheet, cells J4 & K4. I used SUMIFS and UNIQUE functions. Hope this is what you need! ■ Thank you Natalia, this looks like it should work great! ★ You're most welcome, Kel! Glad I could help :) ◎ Hello Natalia, I have shared the sheet with you too, can I have your kind assistance as well? I also have the same issue. Just shared the Google sheet with you! Trying to identify if the number values increased or decreased from month 1 to month 2 for each name. The names are not always the same and don't always match up by cell each month, complicating the issue for me. ◎ Hello Zhi, Thank you for sharing the spreadsheet right away. I've looked into it and created the formulas to solve your task on the last sheet - Copy of Sheet1 - Ablebits. You will see the formulas in cells J2 & K2. I used UNIQUE, ARRAYFORMULA & VLOOKUP functions to solve the task. Before building the formulas, I also used our Combine Duplicate Rows to total numbers for any duplicates your monthly tables may contain. 21. Hi Natalia! Thanks so much for these, please can you help me? I have two Google sheets of names. I need to look one list of names up against the other to see if any have left our business. I need to highlight the names in sheet one that aren't in sheet two, so I can call them. Which formula would be best for me please? Thanks so much for your help in advance! □ Hi Sarah, This part of the blog post provides formulas for when the columns are on 2 different sheets and on the same sheet, please have a look. 22. Hi, I'm hoping you can help me, I'm not sure what function to use (or if it's even possible).
I have a long list of ingredients condensed in one cell separated by commas, and I need to verify it against a list of other ingredients (individually listed in a column). How can I go about this? □ Hi Ariane, Try this formula: 23. Hi, I need to change the colour of the name in a list on sheet one to orange if it appears in list 2 (Sheet 2) and blue if it appears in list 3 (also on Sheet 2). What is the best way to do this? I also have multiple lists on sheet 1 that will need to do the same, still based on the content of sheet 2. □ Hi Tom, I'd suggest looking at 'Example 2. Find and highlight duplicates in two columns in Google Sheets' in this section of the blog post, or using a special tool described here to speed up the process. 24. Thank you so much! 25. Hi Mam, I have a sheet of my school lab testing reports. There are many samples in that sheet with their properties, like density, mass and volume. These samples have their standard properties, e.g. Name / Density / Volume / Mass: Sample A: 5, 24, 9; Sample B: 6, 35, 10. And the standard ranges of these samples are: Sample A: Density 4, Volume 25, Mass 9; Sample B: Density 6, Volume 37, Mass 9. So I want to add formatting based upon the product: if Sample B has values above its standard, they will be automatically highlighted. So after that formatting, as I see it, in Sample A the 5 will be highlighted (because it's above standard) and in Sample B the volume 36 and mass 9 will be highlighted. I think you understood. □ Hi Deepak, If I understand you correctly, you need to create a conditional formatting rule that will check if values in your sample sheet are greater than particular records on a sheet with the standard numbers. A similar case with examples of the rules is covered in this blog post. ☆ Sorry to say, but my case is different...
I have 4 columns: Name, Density, Volume, Mass. All samples have their different results, but I want to format it so that the sample property values are highlighted if the properties exceed the standard. Now tell me how to add the formatting... I mean, first I want to add standard values for the sample names, and highlight results if they are above the limit with respect to the standard. ○ For me to be able to help you better, please share a small sample spreadsheet with us (support@apps4gs.com) with 2 sheets: 1. a copy of your source data 2. the result you expect to get (the result sheet is of great importance and often gives us a better understanding than any text description) I kindly ask you to shorten the tables to 10-20 rows. If you have confidential information there, you can replace it with some irrelevant data, just keep the format. Note. We keep that Google account for file sharing only and don't monitor its Inbox. Please do not email there. Once you share the file, just confirm by replying to this comment. I'll look into it and try to help. ■ Shared just now, please check. ★ I've looked into your file and solved the task with conditional formatting. I used a custom formula with a VLOOKUP function as a rule: ◎ Dear Mam, It's not working; whenever I increase or decrease the value in the standard, there is no automatic change... also, whenever I do the same in the results of the product tab, it does not work there either. ◎ Dear Mam, I am thankful for the response, but it's working only for the Mass row... and also, whenever the standard changes, it's not getting updated automatically. ◎ I'm sorry, I forgot to mention that I applied the rule to the 'Mass' column only. You just needed to create the same rule for the 'Density' and 'Volume' columns. I've just copied the formula for you into those columns as well, please take a look. ◎ Thank you for helping. Will you please tell me what 2,0 is here? Also, please tell me the meaning of each function here. ◎ You're most welcome, Deepak.
You will find the detailed description of the VLOOKUP function and its arguments in this blog post: Google Sheets VLOOKUP with examples. As for INDIRECT, it's used to reference other sheets in conditional formatting. It is explained here: Google Sheets and conditional formatting based on another cell text. 26. What an amazing article! But unfortunately I just can't get your instructions to highlight duplicates in two sheets to work. I have two sheets, one called USD and the second one CNA. In column A of each of them I have a set of dates. I'm trying to have the dates that are duplicated between both sheets highlighted on the USD sheet. For that I did the following: - Selected column A on the sheet called USD - Conditional formatting range: A2:A999 - Custom formula: =A2=INDIRECT("CNA!A2:A") Unfortunately it does not work; none of the duplicated dates across sheets USD and CNA are highlighted. What am I missing here? Thank you for your time! □ I think I got it; conditional formatting on sheet USD and the formula below did the trick. Thank you again for the article. ☆ Thank you for your feedback, Luiz! The way with the INDIRECT that didn't work looks for duplicate records on the same row. If your task is to check for duplicates regardless of their position in a column, you made the right choice with the COUNTIF :) ○ How would you get that formula to ignore empty cells? I am using it for conditional formatting; it works and highlights what I want, but also the empty cells. ■ Hello Sam, For me to be able to help you better, please consider sharing an editable copy of your spreadsheet and a rule with us: support@apps4gs.com Note. We keep that Google account for file sharing only and don't monitor its Inbox. Please do not email there. Once you share the file, just confirm by replying to this comment. I'll do my best to help. 27. Just wanted to say thank you so much for this article! It really helped me with a problem at work! □ Thank you for your feedback, Jasmine!
Glad to know the info is helpful :) 28. Actually I need your help. I am trying to fetch data from one sheet to another by comparing the names on both sheets. 1. I want to get the row where the name "Syed Faizan Ali" appears; the match should be case-insensitive, and it doesn't matter which column the name is in. 2. Then, after fetching the row where "syed faizan ali" (case-insensitive) exists, I want to compare the details from another sheet of mine with this sheet, and get the matched data as well as the unmatched data. Please help. I tried many formulas, but I failed. □ Hello Amin, I've got your spreadsheets and will look into them as soon as possible. Thank you for understanding. □ Hello Amin, If I understand your task correctly, for its first part you can use one of the methods described here: Pull matching data. However, since you have lots of small tables on each sheet and the name appears in a different column in each table, you will have to create separate formulas for each table, because you need to tell the function where to look for that name. The same goes for comparing. You can use the ways described here: Compare two columns or sheets. But again, you will have to compare each pair of tables separately, since all tables have different structures. 29. How to Highlight Duplicates from Two or Three Sheets? □ Hello Niyas, Just mention the required sheets in the formula you're using to compare & highlight cells. 30. Hi, I'd like a formula to intersect 2 columns. For instance, if I have A B C D E in column A and K E T A M R G in column B, I'd like A E in column C. Thanks for your answer. □ Hi Christophe, I'm sorry, but your task is not clear. For me to be able to help you, please share a small sample spreadsheet with us (support@apps4gs.com) with 2 sheets: (1) a copy of your source data (2) the result you expect to get (the result sheet is of great importance and often gives us a better understanding than any text description).
I kindly ask you to shorten the tables to 10-20 rows. Note. We keep that Google account for file sharing only and don't monitor its Inbox. Please do not email there. Once you share the file, just confirm by replying to this comment. 31. Thank you for this!! I needed to use conditional formatting to match two columns, could not figure it out for the life of me, and your explanation has made it so simple :) □ Glad to know it helped, Kate! You're most welcome!
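Several questions in this thread reduce to the same three set operations: values present in both columns, values present only in one, and per-value duplicate counts. As a language-neutral sketch of that logic (the sample names below are made up, not from any commenter's sheet):

```python
# Comparing two columns: values in both, values only in A, duplicate counts.
from collections import Counter

col_a = ["Amy", "Bob", "Cara", "Bob", "Dan"]   # made-up sample data
col_b = ["Bob", "Eve", "Cara"]

b_set = set(col_b)
in_both = [v for v in dict.fromkeys(col_a) if v in b_set]        # like UNIQUE(FILTER(A:A, COUNTIF(B:B, A:A)))
only_in_a = [v for v in dict.fromkeys(col_a) if v not in b_set]  # like FILTER(A:A, ISNA(MATCH(A:A, B:B, 0)))
counts = Counter(col_a)                                          # per-value COUNTIF
repeated = [v for v, n in counts.items() if n > 1]               # the '>1' highlighting condition

print(in_both)    # ['Bob', 'Cara']
print(only_in_a)  # ['Amy', 'Dan']
print(repeated)   # ['Bob']
```

This is the same logic the sheet formulas quoted earlier express: FILTER with COUNTIF for the intersection, ISNA(MATCH(...)) for the difference, and a COUNTIF '>1' condition for duplicates.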
Bypassing the Normalization

Get familiar with the concept of bypassing the normalization process.

The angle $\theta$ controls the probabilities of measuring the qubit in either state 0 or 1. Therefore, $\theta$ also determines $\alpha$ and $\beta$. Let's take a look at the figure of the 2-dimensional qubit system. Any valid qubit state vector must be normalized: $\alpha^2 + \beta^2 = 1$

This states that all valid state vectors have the same magnitude (length). Since they all originate in the center, their heads form a circle whose radius is that magnitude, that is, half of the circle's diameter. In such a situation, Thales' theorem applies. It states that if two conditions are met (first, that A, B, and C are distinct points on a circle; second, that the line AC is a diameter), then the angle $\angle ABC$ (the angle at point B) is right. In our case, the heads of $|0\rangle$, $|\psi\rangle$, and $|1\rangle$ represent the points A, B, and C, respectively. This satisfies the first condition. The line between $|0\rangle$ and $|1\rangle$ is the diameter, which satisfies the second condition. Therefore, the angle at the head of $|\psi\rangle$ is a right angle. Now, the Pythagorean theorem states that the area of the square whose side is opposite the right angle (the hypotenuse, $c$) is equal to the sum of the areas of the squares on the other two sides (the legs $a$ and $b$). Looking at the figure of the 2-dimensional qubit system again, we can see that $\alpha$ and $\beta$ are the two legs of the right triangle and the diameter of the circle is the hypotenuse. Therefore, we can insert the normalization as follows: $c^2 = \alpha^2 + \beta^2 = 1$, so $c = 1$.

The diameter $c$ is two times the radius and is therefore two times the magnitude of any vector $|\psi\rangle$. The length of $|\psi\rangle$ is thus $\frac{c}{2}=\frac{1}{2}$.
Since all qubit state vectors have the same length, including $|0\rangle$ and $|1\rangle$, there are two isosceles triangles ($\triangle M|0\rangle|\psi\rangle$ and $\triangle M|\psi\rangle|1\rangle$). Let's look at the following figure.
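A small numeric check of the Thales and Pythagoras argument above may help. The coordinates below assume the geometry just described (all state-vector heads on a circle of radius 1/2 centered at M, with $|0\rangle$ and $|1\rangle$ diametrically opposite); the specific angles tested are arbitrary:

```python
# Check Thales' theorem and the Pythagorean step numerically.
import math

r = 0.5                      # radius = |psi| = c / 2
A = (-r, 0.0)                # head of |0>
C = (r, 0.0)                 # head of |1>; AC is the diameter, so c = 2r = 1
for theta in (0.3, 1.1, 2.5):
    B = (r * math.cos(theta), r * math.sin(theta))   # head of |psi> on the circle
    BA = (A[0] - B[0], A[1] - B[1])
    BC = (C[0] - B[0], C[1] - B[1])
    dot = BA[0] * BC[0] + BA[1] * BC[1]
    alpha2 = BA[0] ** 2 + BA[1] ** 2   # squared length of one leg
    beta2 = BC[0] ** 2 + BC[1] ** 2    # squared length of the other leg
    assert abs(dot) < 1e-12                     # Thales: right angle at B
    assert abs(alpha2 + beta2 - 1.0) < 1e-12    # Pythagoras: legs sum to c^2 = 1
```

The dot product vanishing confirms the right angle at the head of $|\psi\rangle$, and the two squared leg lengths always sum to the squared diameter, exactly as the normalization condition requires.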
Explanation of graphs involving capacitors (charging/discharging)
• Thread starter Mutaja

In summary, the conversation discusses a circuit with two capacitors and a switch, where the voltage across each component is plotted. It is observed that when the switch is in the second position, the capacitor C1 discharges through R3, while the capacitor C2 starts with a negative voltage and the voltage across R3 is higher than 4V. The conversation also touches on the relative nature of voltage readings (the choice of ground) and how it affects the graph.

Homework Statement
I've tested the circuit above, when the switch is in the 2nd position (not the one in the picture), and got the below graph from the plotted data I received. The capacitor C1 has been charged to 4V and will start to discharge through R3. I'll have to explain this graph in my report. The blue graph is the discharging of C1, the green graph is the charging of C2, and lastly, the red graph is the voltage across R3. Why is the green graph starting below 0 V? And why is the red line starting above 4V? The x-axis of the graph represents time, and the y-axis represents voltage.

Homework Equations

The Attempt at a Solution
I believe that the capacitor C2 (green graph) is below 0 because it hasn't been short-circuited (discharged properly). But why is it affecting the graph with a negative voltage, and not a positive one? Also, the voltage across R3 (red graph) I believe is affected by the negative voltage of the capacitor C2. When the capacitor C1 starts to discharge, C2 will have no voltage across it. All of the voltage in this circuit will be across R1. How can the capacitor C2 be "charged" (from not being properly discharged) with a negative value? How can I explain this properly?

Are they all supposed to be on the same plot? What I mean is, are they all being plotted simultaneously or are you just trying to superimpose three separate plots as one?
If it's the latter, it could be that the voltage amplitudes don't actually correspond. It would make sense to me that each capacitor limits at 2V as t gets larger if the total is 4V initially. Considering the capacitors are the same value, the voltage at t=0 in your plot should be evenly divided across the two. You may be right that it's just the capacitor was not properly discharged initially. Last edited: jaytech said: Are they all supposed to be on the same plot? What I mean is, are they all being plotted simultaneously or are you just trying to superimpose three separate plots as one? If it's the latter, it could be that the voltage amplitudes don't actually correspond. It would make sense to me that each capacitor limits at 2V as t gets large if the total is 4V initially. Considering the capacitors are the same value, the voltage at t=0 in your plot should be evenly divided across the two. All the data is being logged simultaneously. It was all logged within the same session. The voltage is evenly divided between the two capacitors, but one is starting at a negative voltage as shown by the green line in the graph. This can be because the C2 capacitor wasn't properly discharged before use. Why, though, is the voltage across R3 starting at more than 4V? Can the not properly discharged C2 capacitor (meaning it initially has a charge of around 50mV or so, roughly estimated from the graph) affect the circuit in such a way that it gives 500mV extra to R3? Can the C2 capacitor be charged with a negative voltage that the C1 capacitor has to "make up for", therefore having to discharge more than 2V through R3? I'm confused. Yes, that is what appears to be happening. Since C2 is not fully discharged upon altering the switch, it looks to R3 and C1 as having a "negative" voltage. Remember that the reference voltage is 0V.
The initial voltage at t=0 in the new configuration goes up to bring the total to 4V across R3 initially, and upon discharging, C2 discharges to just below 2V in order to compensate. If you had an initial 0V on C2, you would have seen both caps meet at 2V and the initial voltage across R3 as 4V. jaytech said: The initial voltage at t=0 in the new configuration goes up to bring the total to 4V across R3 initially. This might be unnecessary, but the total voltage goes above 4V across R3. This is because C2 isn't properly discharged, and the total voltage in the circuit is therefore V[C1] + V[C2], which equals V[R3], which again equals just above 4V. What I'm still confused about is why the voltage across C2 is negative. If it were the same amount above 0V as it is below now, I would've understood why R3 is 4V (obviously, the graphs for voltage across C1 and C2 would've flattened out at just above 2V). That would be logical, wouldn't it? It's charged with a small amount of voltage that affects the circuit. In my graph, C2 looks to be discharged beyond 0V, which frankly seems impossible? It's all relative, that's what I'm saying. It isn't charged below 0V with respect to where the switch is initially. It appears charged with a positive V here. It's when you flip the switch to position 2 that it appears to be below 0V.
This is because the relative voltage (ground) of the initial configuration remains the same in configuration 2. C[2] believes it has a "negative supply" with respect to this ground, but the total voltage on R[3] will still be V[c1] + V[c2] because that is the total voltage in the loop. The sign is a result of how the current appears from ground. Thanks so much for your help. The only thing I don't quite understand is how the relativity, or ground, can change with the flick of the switch. I think, however, I can find an explanation on that on Google, so I won't bother you. Thanks for taking your time to explain this, I really appreciate it. Mutaja, it does appear that your C2 started off with an initial negative charge. It is only small, but it's apparently there. I am mystified as to how it could come about, since, being an electrolytic capacitor, it is unlikely to have experienced a negative voltage in any properly operating circuit. Maybe the students who constructed the circuit in a class before yours had managed to get that cap around the wrong way and left it with reversed charge? Maybe your group failed to observe its correct polarity? A small positive charge remaining from previous use would be seen as negative charge if the cap was accidentally reversed. With just 4 volts, a large electrolytic could possibly safely withstand reverse polarity for a short while... I like the thoroughness of your questioning. Keep it up!
NascentOxygen said: A small positive charge remaining from previous use would be seen as negative charge if the cap was accidentally reversed. With just 4 volts, a large electrolytic could possibly safely withstand reverse polarity for a short while... I like the thoroughness of your questioning. Keep it up! Thank you, Sir. This was the explanation I went with, in addition to my own views and opinions I guess, but the way you explained it was very logical in my opinion. Well, I think there's a fine line between being considered thorough and being annoying and repetitive. I've also had a bunch of work thrown in my face lately, so I haven't been able to reply quickly all the time, but I guess that's what a forum is for. Answer at your own speed. Thanks again for your reply. It was refreshing to get other people's views on this, and it helped me out a lot. I really appreciate it.

FAQ: Explanation of graphs involving capacitors (charging/discharging)
1. What is a capacitor and how does it work? A capacitor is an electronic component that stores electrical energy in the form of an electric field. It consists of two conductive plates separated by a dielectric material. When a voltage is applied across the plates, one plate becomes positively charged and the other becomes negatively charged, creating an electric field between them. This field allows the capacitor to store energy.
2. How does a capacitor charge and discharge? When a capacitor is connected to a power source, such as a battery, the plates become charged. Electrons from the negative plate are pushed towards the positive plate, creating a potential difference. This process is known as charging. When the power source is removed, the capacitor discharges, releasing the stored energy as the electrons flow back towards the negative plate.
3. What is the relationship between voltage and charge in a capacitor? The voltage across a capacitor is directly proportional to the amount of charge stored on its plates.
This relationship can be represented by the equation V = Q/C, where V is the voltage, Q is the charge, and C is the capacitance of the capacitor. As the charge increases, the voltage across the capacitor also increases.

4. How does the capacitance affect the charging and discharging of a capacitor?
The capacitance of a capacitor is a measure of its ability to store charge. A higher capacitance means the capacitor can store more charge, and therefore can hold more energy. This affects the time it takes for the capacitor to charge and discharge: for a given series resistance R, a higher capacitance results in a longer charging and discharging time, since the time constant is τ = RC.

5. What factors can affect the charging and discharging of a capacitor?
The charging and discharging of a capacitor can be affected by several factors, including the capacitance, voltage, and resistance in the circuit. The type of dielectric material used in the capacitor can also affect its charging and discharging properties. Additionally, the size of and distance between the plates can impact the capacitance and therefore the charging and discharging behavior of the capacitor.
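The charging and discharging curves discussed above follow the standard RC equations V(t) = Vs·(1 − e^(−t/RC)) and V(t) = V0·e^(−t/RC). Here is a minimal sketch in plain Python; the component values (4 V supply, 1 kΩ, 1000 µF) are hypothetical, chosen only to give a 1-second time constant:

```python
import math

def charging_voltage(v_supply, r, c, t):
    """Capacitor voltage while charging through resistance r (ohms),
    capacitance c (farads): V(t) = Vs * (1 - e^(-t/RC))."""
    return v_supply * (1.0 - math.exp(-t / (r * c)))

def discharging_voltage(v_initial, r, c, t):
    """Capacitor voltage while discharging through r: V(t) = V0 * e^(-t/RC)."""
    return v_initial * math.exp(-t / (r * c))

# Hypothetical values: 4 V supply, 1 kOhm resistor, 1000 uF capacitor -> tau = RC = 1 s.
tau = 1e3 * 1000e-6
for t in [0.0, tau, 2 * tau, 5 * tau]:
    print(f"t = {t:.0f} s: charged to {charging_voltage(4.0, 1e3, 1000e-6, t):.3f} V")

# After one time constant the capacitor reaches ~63% of the supply voltage;
# after 5*tau it is, for practical purposes, fully charged (~99.3%).
```

This also illustrates FAQ point 4: doubling C doubles τ = RC, so every point on the curve takes twice as long to reach.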
Chi-square statistic - (Biostatistics) - Vocab, Definition, Explanations | Fiveable

Chi-square statistic
from class: Biostatistics

The chi-square statistic is a measure used in statistics to determine if there is a significant association between categorical variables. It compares the observed frequencies of events with the expected frequencies under the null hypothesis, helping to identify whether any deviations are due to chance or indicate a real relationship between the variables involved.

congrats on reading the definition of chi-square statistic. now let's actually learn it.

5 Must Know Facts For Your Next Test

1. The chi-square statistic is calculated using the formula $$\chi^2 = \sum \frac{(O - E)^2}{E}$$, where O represents the observed frequency and E represents the expected frequency, summed over all categories.
2. A higher chi-square statistic indicates a greater difference between observed and expected values, suggesting a stronger association between variables.
3. Chi-square tests are commonly used in log-linear models to assess how well a proposed model fits the observed data in multi-way contingency tables.
4. For a valid chi-square test, the sample size should be large enough, and expected frequencies should generally be 5 or more to ensure accurate results.
5. Chi-square statistics can be used for goodness-of-fit tests as well as tests of independence to evaluate different types of relationships among categorical data.

Review Questions

• How does the chi-square statistic contribute to understanding relationships between categorical variables in multi-way contingency tables?

The chi-square statistic helps assess whether there is a significant association between categorical variables by comparing observed frequencies to expected frequencies in multi-way contingency tables. A calculated chi-square value indicates how much the actual data deviates from what would be expected if there were no relationship.
By analyzing these deviations, researchers can identify patterns and determine if the relationship between the variables is statistically significant.

• Discuss the importance of expected frequencies when calculating the chi-square statistic and its impact on statistical inference.

Expected frequencies are crucial in calculating the chi-square statistic, as they serve as a benchmark for comparison against observed frequencies. If expected frequencies are too low, the test can lead to inaccurate conclusions about statistical significance. Ensuring that each expected frequency is 5 or greater helps maintain the validity of the chi-square test, enabling reliable inference about associations between variables in contingency tables.

• Evaluate how log-linear models utilize the chi-square statistic in analyzing multi-way contingency tables and improving model fit.

Log-linear models use the chi-square statistic to evaluate how well a proposed model represents the relationships among categorical variables in multi-way contingency tables. By assessing discrepancies between observed and expected counts, researchers can refine their models to better explain data patterns. A significant chi-square value suggests that adjustments are needed, while a non-significant value indicates that the model adequately fits the data, guiding further analysis and interpretation of complex interactions among variables.
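As a concrete illustration of the formula $$\chi^2 = \sum \frac{(O - E)^2}{E}$$, here is a minimal sketch in plain Python; the die-roll counts are invented for illustration:

```python
def chi_square(observed, expected):
    """Chi-square statistic: sum of (O - E)^2 / E over all categories."""
    if len(observed) != len(expected):
        raise ValueError("observed and expected must have the same length")
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical goodness-of-fit example: a die rolled 60 times,
# expected frequency 10 per face under the null hypothesis of fairness.
observed = [8, 12, 9, 11, 10, 10]
expected = [10] * 6
stat = chi_square(observed, expected)
print(f"chi-square = {stat:.2f}")  # (8-10)^2/10 + (12-10)^2/10 + ... = 1.00
```

A small statistic like this one, compared against the chi-square distribution with 5 degrees of freedom, would not reject the null hypothesis that the die is fair.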
“Two cats sit on a roof. Which falls off first?”/“The one with the smaller mu.”

A popular math joke is: Q: Two cats are sitting on a sloping roof. Which one falls off first? A: The one with the smallest mu. The Greek letter mu stands for the coefficient of friction. The cat with the smallest mu—the least friction—will fall off the roof first. Also, cats say “mu” or “meow.” The math joke has been cited in print since at least 1993.

Wikipedia: Coefficient of friction
A coefficient of friction is a value that shows the relationship between the force of friction between two objects and the normal force between the objects. It is a value that is sometimes used in physics to find an object's normal force or frictional force, when other methods aren't available. The coefficient of friction appears in the equation f = μF_n. In that equation, f is the frictional force, μ is the coefficient of friction, and F_n is the normal force.

Google Groups: eunet.jokes
I apologise on behalf of my flatmate for the following… Two cats are sitting on a sloping roof. Which one falls off first? The one with the least mu.

Google Groups: rec.humor
Funny Name: Football Player D.J. Wischik
Q. Two cats are sitting on a tin roof, and one falls off. Which one stays on? A. The one with the largest mew. (Greek letter mu = coefficient of friction)

Google Groups: eunet.jokes
Just a quickie, with a longie attached
Paul G Roberts
BEWARE: Specialist Joke. I've been reading this splendid forum for ooh ages now (two days) and I would like to offer up the following, which has probably been seen before. Q: Two cats are sitting on a shed roof. Which one falls off first? A: The one with the smaller mu.

Two Cats on a Roof (idea) by The Big D, Sat Nov 13 1999 at 10:24:38
Which one falls off first? The one with the smaller mew. he he. ha. ahh. mm. Well, in England, at any rate, the Greek letter mu is used to denote the coefficient of friction.
sorry about that - have a look at some other geek jokes.

The Geek Culture Forums
Terrible Maths Joke
Posted by spungo (Member # 1089) on April 22, 2004, 09:16:
Another crap maths joke: Two cats sitting on an inclined roof - which one falls off first? ans: the one with the smallest mu

Marcus du Sautoy
Two cats on a roof. Which one slips off first? The one with the smallest mu. (mu is the symbol for coefficient of friction)
7:29 PM - 14 May 2009

Math Jokes Explained - Numberphile
Published on May 20, 2013
Some of your favourite maths jokes are dissected in forensic fashion. “Two cats are standing on a roof. Which one falls off first? The one with the smaller mu.”

Nerdy Jokes
May 7, 2015
A Furry Friction Funny
By LC
Q. Two cats are sitting on a roof. Which one slides off first? A. The one with the smaller mu!
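For readers who want the physics behind the punchline: a body resting on an incline starts to slip when the slope's tangent exceeds the coefficient of static friction, i.e. when tan θ > μ. A small Python sketch (the slope angle and μ values are invented for illustration):

```python
import math

def slips(mu, slope_deg):
    """A body on an incline slips when the component of gravity along the
    slope exceeds the maximum static friction, i.e. when tan(theta) > mu."""
    return math.tan(math.radians(slope_deg)) > mu

# Two hypothetical cats on a 30-degree roof: tan(30 deg) is about 0.577.
cat_a, cat_b = 0.4, 0.8  # each cat's coefficient of friction ("mu")
print(slips(cat_a, 30))  # True  -> the cat with the smaller mu falls off first
print(slips(cat_b, 30))  # False -> the cat with the larger mu stays put
```

The mass of the cat cancels out of the condition, which is why only μ (and the roof angle) decides which cat goes first.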
Foundations of Modern AI

Registration

9:00–10:20
Opening Tutorial: Perspectives on learning in games - Part I (slides)
Gabriele Farina, MIT

Abstract: The main focus of the tutorial will be learning in games. The first part of the tutorial will be based on no-regret learning algorithms such as regret matching, FTRL, OMD and their optimistic variants for normal-form games. The second part of the tutorial will be about sequential games, building up to imperfect-information games, regret decomposition and, if time permits, kernelization.

Short Bio: Gabriele Farina is an Assistant Professor at MIT in EECS and LIDS, additionally affiliated with the Operations Research Center (ORC). He holds the X-Window Consortium Career Development Chair. Before that, he spent a year as a Research Scientist at FAIR (Meta AI), where he worked on Cicero, a human-level AI agent combining strategic reasoning and natural language. Before that, he was a Ph.D. student in the Computer Science Department at Carnegie Mellon University, where he worked with Tuomas Sandholm. He was supported by a 2019-2020 Facebook Fellowship in the area of Economics and Computation, and he is the recipient of the ACM SIGecom Doctoral Dissertation Award and a NeurIPS 2020 Best Paper Award.

Coffee break

10:50–11:30
Non-clairvoyant scheduling with (untrusted) predictions
Evripidis Bampis, Sorbonne University

Abstract: We revisit classical non-clairvoyant scheduling problems where the processing time of each job is not known until the job finishes. We adopt the framework of learning-augmented algorithms and we study the question of whether (possibly erroneous) predictions may help design algorithms with a competitive ratio which is good when the prediction is accurate (consistency), deteriorates gradually with respect to the prediction error (smoothness), and is not too bad and bounded when the prediction is arbitrarily bad (robustness).
Short Bio: Evripidis Bampis received his Diploma of Electrical Engineering from the National Technical University of Athens in 1989, and his MSc and PhD from the University of Paris-Sud (Orsay) in 1990 and 1993, respectively. He joined the department of Computer Science of the University of Evry (France) in September 1993, where he served as an Assistant Professor from 1993 to 1998 and then as a Professor. He led the Optimization and Algorithms group at the IBISC laboratory from 1998 to 2010. He was also the director of the School of Doctoral Studies. Since September 2010, he has been appointed at UPMC, now Sorbonne University, as a Professor, and he is a member of the Operations Research group of the laboratory LIP6. He served as the Director of the Department of the Master in Computer Science at Sorbonne University from 2017 to 2020. His main research interests concern the design and the analysis of algorithms for scheduling and graph problems. He is in particular interested in approximation and on-line algorithms, algorithmic game theory, and learning-augmented algorithms. He was the coordinator of or a participant in various national, European, or international projects.

Short break

11:45–12:10
A proof of the Nisan-Ronen conjecture (slides)
Annamaria Kovacs, Goethe University Frankfurt

Abstract: We show that the best approximation ratio of deterministic truthful mechanisms for makespan-minimization for $n$ unrelated machines is $n$, as conjectured by Noam Nisan and Amir Ronen.

Short Bio: Annamaria studied mathematics at Eötvös Lorand University, Budapest, Hungary. She received her PhD in Theoretical Computer Science from Saarland University / Max Planck Institute for Informatics in Saarbrücken, Germany in 2007. She has been a research associate at Goethe University Frankfurt since 2012. Annamaria's research focus is in the field of scheduling algorithms and algorithmic mechanism design.
Her results include: the first deterministic truthful PTAS for related machine scheduling, a seminal paper on simultaneous item bidding auctions, and the proof of the Nisan-Ronen conjecture.

12:10–12:35
The Complexity of Equilibrium Computation in First-Price Auctions (slides)
Aris Filos-Ratsikas, University of Edinburgh

Abstract: In a First-Price auction, there is one item for sale and a set of bidders who submit their bids for the item; the winner is the bidder with the highest bid, and the payment is that bidder's bid. It is well-known that this auction format, while being fundamental and widely used in practice, provides incentives to the bidders to underbid, aiming to win the item at a favourable price. In this talk, I will present results on the computational complexity of computing Bayes-Nash equilibria of the First-Price auction, when the prior beliefs about other bidders' values come from either continuous or discrete distributions. For the case of subjective priors, we show that it is PPAD-complete to compute either pure equilibria for continuous priors or mixed equilibria for discrete priors, and in fact that there is an inherent computational equivalence between those two settings. For pure equilibria and discrete subjective priors, we show that the problem of deciding equilibrium existence is NP-complete. I will also present some positive results for the case of symmetric equilibria and iid bidders. From joint works with Yiannis Giannakopoulos, Alexandros Hollender, Philip Lazos, and Diogo Pocas, and with Yiannis Giannakopoulos, Alexandros Hollender, and Charalampos Kokkalis.

Short Bio: Aris Filos-Ratsikas is a Lecturer (Assistant Professor) at the University of Edinburgh. His research lies in the intersection of theoretical computer science and artificial intelligence, with an emphasis on algorithms and computational complexity.
In particular, he is interested in problems related to social choice theory, fair division, competitive markets, game theory and mechanism design. He obtained his PhD degree from the Computer Science Department of Aarhus University, Denmark in 2015, under the supervision of Peter Bro Miltersen. In the past, he was a Lecturer at the University of Liverpool, a postdoctoral researcher at École Polytechnique Fédérale de Lausanne, and a postdoctoral research assistant at the University of Oxford.

12:35–13:00
Constant Inapproximability for Fisher Markets (slides)
Argyrios Deligkas, Royal Holloway University of London

Abstract: We study the problem of computing approximate market equilibria in Fisher markets with separable piecewise-linear concave (SPLC) utility functions. In this setting, the problem was only known to be PPAD-complete for inverse-polynomial approximations. We strengthen this result by showing PPAD-hardness for constant approximations. This means that the problem does not admit a polynomial-time approximation scheme (PTAS) unless PPAD = P. In fact, we prove that computing any approximation better than 1/11 is PPAD-complete. As a direct byproduct of our main result, we get the same inapproximability bound for Arrow-Debreu exchange markets with SPLC utility functions. Joint work with John Fearnley, Alexandros Hollender, and Themistoklis Melissourgos. Accepted for publication at EC 2024.

Short Bio: Argyrios Deligkas is a Senior Lecturer (Associate Professor) at Royal Holloway University of London. He received his PhD from the University of Liverpool. His research interests span Econ-CS, with a focus on equilibrium computation problems, Computational Social Choice, and Temporal Networks.
Lunch Break

14:30–15:50
Tutorial: Wonders of high-dimensions: the Maths and Physics of Machine Learning - Part I
Bruno Loureiro, ENS and CNRS

Abstract: The past decade has witnessed a surge in the development and adoption of machine learning algorithms to solve day-to-day computational tasks. Yet, a solid theoretical understanding of even the most basic tools used in practice is still lacking, as traditional statistical learning methods are unfit to deal with the modern regime in which the number of model parameters is of the same order as the quantity of data – a problem known as the curse of dimensionality. Curiously, this is precisely the regime studied by physicists since the mid-19th century in the context of interacting many-particle systems. This connection, which was first established in the seminal work of Elizabeth Gardner and Bernard Derrida in the 80s, is the basis of a long and fruitful marriage between these two fields.

The goal of this tutorial is to provide an in-depth overview of these connections, from the early days to more recent developments. After a brief historical summary, we will introduce some useful techniques from the statistical physics toolbox through the paradigmatic study of shallow neural networks, starting from their generalisation properties at initialisation (and its connection to kernel methods) to the feature learning regime after a few steps of training.

Short Bio: Bruno Loureiro is currently a CNRS researcher based at the Département d'Informatique of the École Normale Supérieure in Paris, working on the crossroads between machine learning and statistical mechanics. After a PhD at the University of Cambridge, he held postdoctoral positions at IPhT in Paris and EPFL in Lausanne. He is interested in Bayesian inference, theoretical machine learning and high-dimensional statistics more broadly.
His research aims at understanding how data structure, optimisation algorithms and architecture design come together in successful learning.

Coffee Break

(until 17:00)
Learning from time series data using information-theoretic methods
Ioannis Kontoyiannis, University of Cambridge

Abstract: A hierarchical Bayesian framework is introduced for developing rich mixture models for real-valued time series, partly motivated by important applications in financial time series analysis. The model construction combines information-theoretic ideas originally developed in the context of the context-tree weighting algorithm, with standard statistical models for time series. We call the overall construction the Bayesian Context Trees State Space Model (or BCT-X) framework. Efficient algorithms are introduced that allow for effective, exact Bayesian inference, and which can be updated sequentially, facilitating effective forecasting. The utility of the general framework is illustrated on real-world financial data, where the BCT-X framework revealed novel, natural structure, in the form of an enhanced leverage effect that had not been identified before. This is joint work with Ioannis Papageorgiou.

Short Bio: Ioannis Kontoyiannis was born in Athens, Greece, in 1972. He studied mathematics at Imperial College and Cambridge, and he received the M.S. degree in statistics and the Ph.D. degree in electrical engineering, both from Stanford University. In 1995 he worked at IBM Research, on a NASA-IBM project. From 1998 to 2001 he was with Purdue University. Between 2000 and 2005 he was with the Division of Applied Mathematics and the Department of Computer Science at Brown University. Between 2005 and 2021 he was with the Athens University of Economics and Business. In 2009 he was a visiting professor with the Department of Statistics at Columbia University.
Between 2018 and 2020 he was with the Department of Engineering of the University of Cambridge, where he was Head of the Signal Processing and Communications Laboratory. In 2020 he joined the Department of Pure Mathematics and Mathematical Statistics at the University of Cambridge, where he is the Churchill Professor of Mathematics of Information. He is a Fellow of Darwin College, Cambridge, and an Associate Member of the Signal Processing and Communications Laboratory, at the Information Engineering Division within the Department of Engineering. He has been awarded a Manning endowed assistant professorship (Brown University), a Sloan Foundation Research Fellowship, an honorary Master of Arts degree Ad Eundem (Brown University) and a Marie Curie Fellowship. He is a Fellow of the IEEE (Institute of Electrical and Electronics Engineers), of the AAIA (Asia-Pacific Artificial Intelligence Association), and of the IMS (Institute of Mathematical Statistics).

Short break

17:10–18:30
Tutorial: Contextual Reinforcement Learning without the Contexts, a.k.a., Latent Markov Decision Processes - Part I
Constantine Caramanis, UT-Austin

Abstract: In many interactive decision-making settings, there is latent and unobserved information that remains fixed. This is in fact the case in any interactive system where the decision-maker interacts over a time horizon with another user (or system), but does not (or cannot) know everything about that user - the unknown elements make this a partially observed problem, but a special one, since they remain fixed over each interaction horizon with the same user. Thus this is one of the fundamental unsolved settings of Reinforcement Learning.

Examples of this setting abound. Consider, for example, a dialogue system, where complete information about a user (e.g., the user's preferences) is not given.
Similarly, we might consider the medical treatment of a patient over a period of time, where again not everything about the patient, the patient's preferences or response to a given course of medical treatment, is known. In such an environment, the latent information remains fixed throughout each episode, since the identity of the user does not change during an interaction. This type of environment can be modeled as a Latent Markov Decision Process (LMDP). This is a special instance of Partially Observed Markov Decision Processes (POMDPs). However, as we discuss in this tutorial, in many cases of interest, using results from POMDPs typically does not work. In this tutorial, we consider this important setting within the Reinforcement Learning paradigm. We are interested in understanding the modeling framework of LMDPs, the known information-theoretic lower bounds, and the established achievability results.

Short Bio: Constantine Caramanis is a Professor in Electrical and Computer Engineering at UT Austin. He received the Ph.D. degree in EECS from MIT. He is a recipient of an NSF CAREER award, and is an IEEE Fellow. His research interests focus on optimization, machine learning and statistics.

9:00–10:20
Tutorial: Wonders of high-dimensions: the Maths and Physics of Machine Learning - Part II (slides)
Bruno Loureiro, ENS and CNRS

Abstract: The past decade has witnessed a surge in the development and adoption of machine learning algorithms to solve day-to-day computational tasks. Yet, a solid theoretical understanding of even the most basic tools used in practice is still lacking, as traditional statistical learning methods are unfit to deal with the modern regime in which the number of model parameters is of the same order as the quantity of data – a problem known as the curse of dimensionality. Curiously, this is precisely the regime studied by physicists since the mid-19th century in the context of interacting many-particle systems.
This connection, which was first established in the seminal work of Elizabeth Gardner and Bernard Derrida in the 80s, is the basis of a long and fruitful marriage between these two fields.

The goal of this tutorial is to provide an in-depth overview of these connections, from the early days to more recent developments. After a brief historical summary, we will introduce some useful techniques from the statistical physics toolbox through the paradigmatic study of shallow neural networks, starting from their generalisation properties at initialisation (and its connection to kernel methods) to the feature learning regime after a few steps of training.

Short Bio: Bruno Loureiro is currently a CNRS researcher based at the Département d'Informatique of the École Normale Supérieure in Paris, working on the crossroads between machine learning and statistical mechanics. After a PhD at the University of Cambridge, he held postdoctoral positions at IPhT in Paris and EPFL in Lausanne. He is interested in Bayesian inference, theoretical machine learning and high-dimensional statistics more broadly. His research aims at understanding how data structure, optimisation algorithms and architecture design come together in successful learning.

Coffee break

10:50–11:30
Two stories about distortion in social choice
Ioannis Caragiannis, Aarhus University

Abstract: The notion of distortion has received much attention in recent years by the computational social choice community. In general, distortion quantifies how the lack of complete information affects the quality of the social choice outcome. Ideally, a distortion of 1 means that the social choice outcome is the most efficient one.

In the talk, we will consider two related scenarios. The first one is inspired by voting under the impartial culture assumption. We assume that agents have random values for the alternatives, drawn from a probability distribution independently for every agent-alternative pair.
We explore voting rules that use a limited number of queries per agent in addition to the agent's ordinal information. For simple distributions, we present rules that always select an alternative of social welfare that is only a constant factor away from the optimal social welfare (i.e., rules of constant distortion). The second scenario is motivated by the practice of sortition. Here, we assume that agents correspond to points on a metric space. Our objective is to select, in a fair manner, a subset of the agents (corresponding to a citizens' assembly) so that for every election with alternatives from the same metric space, the most preferred alternative of the citizens' assembly has a social cost that is very close to that of the optimal alternative for the whole agent population. Our positive results indicate that assemblies of size logarithmic in the number of alternatives are sufficient to get constant distortion in this model. The talk is based on two papers that are joint works with Karl Fehrs, and with Evi Micha and Jannik Peters, respectively. Short Bio: Ioannis Caragiannis (Computer Scientist and Engineer; Diploma, 1996; PhD, 2002, University of Patras, Greece) is a Professor at the Department of Computer Science of Aarhus University, Denmark, where he also serves as the Head of the research group on Computational Complexity and Game Theory. His research interests include design and analysis of algorithms (including approximation and online algorithms), economics and computation (computational aspects of fair division, voting, matching problems, auctions, and strategic games), and foundations of machine learning and artificial intelligence (including strategic aspects of learning tasks and data processing). He has more than 190 publications in conference proceedings, journals, or as book chapters. 
For his research, he has received the 2024 Prize in Game Theory and Computer Science in honour of Ehud Kalai, the 2022 Artificial Intelligence Journal Prominent Paper Award, and a Distinguished Paper Honorable Mention at IJCAI 2019.

Short break

11:45–12:10
Alternation makes the adversary weaker in two-player games (slides)
Stratis Skoulakis, EPFL

Abstract: Motivated by alternating game-play in two-player games, we study an alternating variant of Online Linear Optimization (OLO). In alternating OLO, a learner at each round $t \in [T]$ selects a vector $x^t$ and then an adversary selects a cost-vector $c^t \in [-1,1]^n$. The learner then experiences cost $(c^t + c^{t-1})^\top x^t$ instead of $(c^t)^\top x^t$ as in standard OLO. We establish that under this small twist, the $\Omega(\sqrt{T})$ lower bound on the regret is no longer valid. More precisely, we present two online learning algorithms for alternating OLO that respectively admit $\mathcal{O}((\log n)^{4/3} T^{1/3})$ regret for the $n$-dimensional simplex and $\mathcal{O}(\rho \log T)$ regret for the ball of radius $\rho>0$. Our results imply that in alternating game-play, an agent can always guarantee $\tilde{\mathcal{O}}((\log n)^{4/3} T^{1/3})$ regret regardless of the strategies of the other agent, while the regret bound improves to $\mathcal{O}(\log T)$ in case the agent admits only two actions.

Short Bio: Stratis is a postdoctoral research fellow at the Laboratory for Information and Inference Systems hosted at EPFL. His interests lie at the intersection of Game Theory, Optimization and Machine Learning. Stratis received his Ph.D. in Algorithmic Game Theory from the National Technical University of Athens under the supervision of Dimitris Fotakis. From 2019 to 2021 he was a postdoctoral research fellow at Singapore University of Technology.
12:10–12:35
Fairly Allocating Indivisible Goods to Strategic Agents (slides)
Georgios Birmpas, University of Liverpool

Abstract: We consider the problem of fairly allocating a set of indivisible goods to a set of strategic agents with additive valuation functions. We assume no monetary transfers and, therefore, a mechanism in our setting is an algorithm that takes as input the reported - rather than the true - values of the agents. Our main goal is to explore whether there exist mechanisms that have pure Nash equilibria for every instance and, at the same time, provide fairness guarantees for the allocations that correspond to these equilibria. We focus on a relaxation of envy-freeness, namely envy-freeness up to one good (EF1), and we positively answer the above question. In particular, we study an algorithm that is known to produce such allocations in the non-strategic setting: Round-Robin. We show that all of its pure Nash equilibria induce allocations that are EF1 with respect to the underlying true values, while we also prove that, surprisingly, a version of this result holds even for agents with cancelable or submodular valuation functions.

Short Bio: Georgios Birmpas is currently a lecturer (assistant professor) in the Department of Computer Science of the University of Liverpool. Before that, he worked for three years as a postdoctoral researcher at the Department of Computer, Control, and Management Engineering of Sapienza University of Rome, and for 1.5 years as a research associate at the Department of Computer Science of the University of Oxford. He got his PhD from the Department of Informatics of the Athens University of Economics and Business, an MSc in Logic and Theory of Algorithms (ΜΠΛΑ) from the Department of Mathematics of the University of Athens, and his Diploma from the School of Applied Mathematics and Physics of the National Technical University of Athens.
His research interests include Algorithmic Game Theory, Approximation Algorithms, Fair Division, and Computational Social Choice.

12:35–13:00
Transfer Learning Beyond Bounded Density Ratios (slides)
Alkis Kalavasis, Yale

Abstract: We study the fundamental problem of transfer learning where a learning algorithm collects data from some source distribution $P$ but needs to perform well with respect to a different target distribution $Q$. A standard change-of-measure argument implies that transfer learning happens when the density ratio $dQ/dP$ is bounded. Yet, prior thought-provoking works by Kpotufe and Martinet (COLT, 2018) and Hanneke and Kpotufe (NeurIPS, 2019) demonstrate cases where the ratio $dQ/dP$ is unbounded, but transfer learning is possible. In this talk, we will discuss a general transfer inequality over the Euclidean domain, proving that non-trivial transfer learning for low-degree polynomials is possible under very mild assumptions, going well beyond the classical assumption that $dQ/dP$ is bounded. For instance, it always applies if $Q$ is a log-concave measure and the inverse ratio $dP/dQ$ is bounded. This is based on joint work with Ilias Zadik and Manolis Zampetakis.

Short Bio: Alkis is an FDS Postdoctoral Fellow at Yale University working on learning theory and its connections with statistics, computational complexity and optimization. Before that, Alkis was a PhD student in the Computer Science Department of the National Technical University of Athens (NTUA) working with Dimitris Fotakis and Christos Tzamos. He completed his undergraduate studies in the School of Electrical and Computer Engineering of the NTUA, where he was advised by Dimitris Fotakis.

Lunch Break

14:30–15:50
Tutorial: Contextual Reinforcement Learning without the Contexts, a.k.a., Latent Markov Decision Processes - Part II
Constantine Caramanis, UT-Austin

Abstract: In many interactive decision-making settings, there is latent and unobserved information that remains fixed.
This is in fact the case in any interactive system where the decision-maker interacts over a time horizon with another user (or system), but does not (or cannot) know everything about that user - the unknown elements make this a partially observed problem, but a special one, since they remain fixed over each interaction horizon with the same user. Thus this is one of the fundamental unsolved settings of Reinforcement Learning.

Examples of this setting abound. Consider, for example, a dialogue system, where complete information about a user (e.g., the user's preferences) is not given. Similarly, we might consider the medical treatment of a patient over a period of time, where again not everything about the patient, the patient's preferences or response to a given course of medical treatment, is known. In such an environment, the latent information remains fixed throughout each episode, since the identity of the user does not change during an interaction. This type of environment can be modeled as a Latent Markov Decision Process (LMDP). This is a special instance of Partially Observed Markov Decision Processes (POMDPs). However, as we discuss in this tutorial, in many cases of interest, using results from POMDPs typically does not work. In this tutorial, we consider this important setting within the Reinforcement Learning paradigm. We are interested in understanding the modeling framework of LMDPs, the known information-theoretic lower bounds, and the established achievability results.

Short Bio: Constantine Caramanis is a Professor in Electrical and Computer Engineering at UT Austin. He received the Ph.D. degree in EECS from MIT. He is a recipient of an NSF CAREER award, and is an IEEE Fellow. His research interests focus on optimization, machine learning and statistics.
Coffee Break

Learning-Augmented Mechanism Design
Vasilis Gkatzelis, Drexel University

Abstract: This talk will introduce the model of "learning-augmented mechanism design" (or "mechanism design with predictions"), which is an alternative model for the design and analysis of mechanisms in strategic settings. Aiming to complement the traditional approach in computer science, which analyzes the performance of algorithms based on worst-case instances, recent work on "algorithms with predictions" has developed algorithms that are enhanced with machine-learned predictions regarding the optimal solution. The algorithms can use this information to guide their decisions and the goal is to achieve much stronger performance guarantees when these predictions are accurate (consistency) while also maintaining good worst-case guarantees, even if these predictions are very inaccurate (robustness). This talk will focus on the adaptation of this framework into mechanism design and provide an overview of some results along this line of work.

Short Bio: Vasilis Gkatzelis is an associate professor of computer science at the College of Computing & Informatics of Drexel University, and he is a recipient of the NSF CAREER award. Prior to joining Drexel, he held positions as a postdoctoral scholar at the computer science departments of UC Berkeley and Stanford University, and as a research fellow at the Simons Institute for the Theory of Computing. He received his PhD from the Courant Institute of New York University and his research focuses on problems in algorithmic game theory and approximation algorithms.

Short break

Tutorial: Perspectives on learning in games - Part II (slides)
Gabriele Farina, MIT

Abstract: The main focus of the tutorial will be about learning in games: The first part of the tutorial will be based on no-regret learning algorithms such as regret matching, FTRL, OMD and their optimistic variants for normal-form games.
The second part of the tutorial will be about sequential games, building up to imperfect-information games, regret decomposition, and, if time permits, kernelization.

Short Bio: Gabriele Farina is an Assistant Professor at MIT in EECS and LIDS, additionally affiliated with the Operations Research Center (ORC). He holds the X-Window Consortium Career Development Chair. Before that, he spent a year as a Research Scientist at FAIR (Meta AI), where he worked on Cicero, a human-level AI agent combining strategic reasoning and natural language. Before that, he was a Ph.D. student in the Computer Science Department at Carnegie Mellon University, where he worked with Tuomas Sandholm. He was supported by a 2019-2020 Facebook Fellowship in the area of Economics and Computation and he is the recipient of the ACM SIGecom Doctoral Dissertation Award and a NeurIPS 2020 Best Paper Award.
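As an illustration of the non-strategic Round-Robin procedure that the first talk builds on, here is a small sketch for additive valuations. The input format, variable names, and the EF1 check below are illustrative choices of this sketch, not taken from the talk:

```python
from itertools import count

def round_robin(values):
    """Agents take turns; each picks their favourite remaining good.
    values[i][g] is agent i's (reported) value for good g."""
    n, m = len(values), len(values[0])
    remaining, alloc = set(range(m)), [[] for _ in range(n)]
    for turn in range(m):
        i = turn % n
        g = max(remaining, key=lambda g: values[i][g])
        alloc[i].append(g)
        remaining.remove(g)
    return alloc

def is_ef1(values, alloc):
    """Envy-freeness up to one good, w.r.t. additive values:
    any envy towards another bundle must disappear after removing
    the single most valuable good (in the envier's eyes) from it."""
    for i in range(len(values)):
        mine = sum(values[i][g] for g in alloc[i])
        for j, bundle in enumerate(alloc):
            if i == j or not bundle:
                continue
            theirs = sum(values[i][g] for g in bundle)
            if theirs - max(values[i][g] for g in bundle) > mine:
                return False
    return True
```

For example, with two agents and values `[[5, 3, 2, 1], [4, 4, 1, 1]]`, agent 0 takes good 0, agent 1 takes good 1, and so on; the resulting allocation `[[0, 2], [1, 3]]` passes the EF1 check.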
{"url":"https://www.corelab.ntua.gr/aifoundations2024/","timestamp":"2024-11-04T07:13:36Z","content_type":"text/html","content_length":"56263","record_id":"<urn:uuid:b1729f9a-be82-42e0-9ec2-35e8c41b3423>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00514.warc.gz"}
Add Subtract Multiply Divide Mixed Numbers Worksheet

Add Subtract Multiply Divide Mixed Numbers Worksheet – There's a wealth of evidence to suggest that number worksheets can help children develop their math skills. This article will focus on the importance of number worksheets for children. We will discuss the benefits and the different kinds of number worksheets, and we will look at two case studies that show how number worksheets helped students improve their math skills within a short period.

Purpose of Using a Numbers Worksheet and How It Helps Educators

A numbers worksheet is used to help students practice the fundamental math skills they learned in class. Students can use it for private practice or for group activities, and teachers can use it to assess student understanding of the topic. A numbers worksheet lets educators make a quick and simple assessment of students' understanding of particular math skills. Teachers can also use these worksheets to check that students are on track with their learning objectives and adjust as needed.

5 Effective Ways You Can Use a Numbers Worksheet to Teach Children Math

A numbers worksheet is a piece of paper with rows and columns used to teach children math, most often in elementary schools. This article presents several ways you can use a numbers worksheet to teach kids math. The first is to have the child copy the numbers from the top row into the corresponding column. Another is to have them colour each number to match the colour of its corresponding column. A third is to have them count out loud as they fill in each row, independently or with the help of an adult. A fourth is to use a number grid, filling in each number that matches its position on the line, beginning with zero and continuing until nine.
Final Thoughts on the Numbers Worksheet

We hope this post helps you understand the numbers worksheet and how to use it in your work.
{"url":"https://www.alphabetworksheetsfree.com/add-subtract-multiply-divide-mixed-numbers-worksheet/","timestamp":"2024-11-03T04:33:33Z","content_type":"text/html","content_length":"61420","record_id":"<urn:uuid:7fa8314d-0a75-4ea5-9354-11c6319b88ed>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00537.warc.gz"}
Weil uniformization theorem

1. created Weil uniformization theorem with a brief idea of the statement and a pointer to the decent review Sorger 99;
2. added the statement in a bit more detail to moduli space of bundles (but it still needs more discussion there);
3. used the pointer to the new entry (in place of some previous text) to streamline the function field analogy – table a little bit.

replaced stale link with a working one

Mamuka Jibladze

diff, v6, current
{"url":"https://nforum.ncatlab.org/discussion/6132/","timestamp":"2024-11-02T05:26:00Z","content_type":"application/xhtml+xml","content_length":"14005","record_id":"<urn:uuid:7c3b3687-6a0b-4837-a361-f276eba54749>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00853.warc.gz"}
Problem A

Baking bread is my favourite spare-time pursuit. I have a number of stainless steel mixing bowls with straight sides, a circular bottom and a wider circular top opening. Geometrically, my bowls are truncated circular cones and for this problem, the thickness of the metal may be disregarded. I store these bowls stacked in the natural way, that is with a common vertical axis, and I stack them in an order that minimises the total height of the stack. Finding this minimum is the purpose of your program.

On the first line of the input is a positive integer, telling the number of test cases to follow (at most $10$). Each case starts with one line containing an integer $n$, the number of bowls ($2\leq n \leq 9$). The following $n$ lines each contain three positive integers $h, r, R$, specifying the height, the bottom radius and the top radius of the bowl, and $r<R$ holds true. You may also assume that $h,r,R<1000$.

For each test case, output one line containing the minimal stack height, truncated to an integer (note: truncated, not rounded).

Sample Input 1
Sample Output 1
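With at most 9 bowls, brute force over all stacking orders is feasible. The delicate part is the minimal vertical offset between two bowls: since both walls are straight lines in a radius-height plot, any penetration first appears at an end of the overlapping height interval, and raising the upper bowl only ever helps, so a binary search on the offset works. The following is an illustrative sketch of that idea (not a reference solution, and input parsing is omitted); `solve` takes a list of `(h, r, R)` triples:

```python
from itertools import permutations

def radius(bowl, y):
    """Wall radius of a bowl at height y above its own bottom."""
    h, r, R = bowl
    return r + (R - r) * y / h

def fits(lower, upper, d):
    """True if `upper`, with its bottom d above `lower`'s bottom, does not
    penetrate `lower`. Both wall profiles are linear in height, so checking
    the two ends of the overlapping height interval suffices."""
    lo, hi = d, min(lower[0], d + upper[0])
    if lo >= hi:                      # no vertical overlap at all
        return True
    eps = 1e-9
    return all(radius(upper, y - d) <= radius(lower, y) + eps for y in (lo, hi))

def min_offset(lower, upper):
    """Smallest offset d >= 0 at which `upper` rests in/on `lower`.
    Feasibility is monotone in d; at d = lower's height the upper bowl
    sits on the rim, which is always feasible."""
    if fits(lower, upper, 0.0):
        return 0.0
    lo, hi = 0.0, lower[0]
    for _ in range(60):
        mid = (lo + hi) / 2
        if fits(lower, upper, mid):
            hi = mid
        else:
            lo = mid
    return hi

def stack_height(order):
    bottoms = []
    for i, bowl in enumerate(order):
        # a bowl must clear every bowl below it, not just its neighbour
        d = max([bottoms[j] + min_offset(order[j], bowl) for j in range(i)],
                default=0.0)
        bottoms.append(d)
    return max(d + b[0] for d, b in zip(bottoms, order))

def solve(bowls):
    """Minimal stack height over all orders, truncated to an integer."""
    return int(min(stack_height(p) for p in permutations(bowls)))
```

For instance, bowls `(2, 1, 3)` and `(2, 2, 3)` stack to height 2 when the narrow-bottomed bowl goes on top (it nests flush), but to height 3 the other way around.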
{"url":"https://kth.kattis.com/courses/DD2458/popup17/assignments/kum7c8/problems/bowlstack","timestamp":"2024-11-05T10:43:55Z","content_type":"text/html","content_length":"26578","record_id":"<urn:uuid:233b435c-0391-4983-a8f1-ec904d62a90d>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00897.warc.gz"}
Common Algebra Mistakes - My Maths Guy

Avoid these common algebra mistakes and stop throwing away marks on homework, assessments and exams. Never waste these marks again!

The first thing I check with all of my students is their algebra. No matter how well they understand and can apply theorems, formulas or rules, they must have solid algebra. Algebra is the grammar and punctuation of Math. In English you can't write a great essay with bad grammar and punctuation, regardless of how good the topic is. In Math, you can't solve problems without good use of algebra. Some algebra takes time and experience to develop. However, there are common algebra mistakes that I see many times every day. You can avoid making these now and never waste those marks again.

Any questions about this video drop us a message HERE.

Key Points

- Cancelling in a fraction means dividing the numerator and the denominator by the same value
- To square a bracket, write the bracket twice and multiply the two together
- Quadratic equations are solved by factorising and separating into two equations
- Don't confuse quadratic equations with linear equations, which use a different technique
- When you square a negative the answer should be positive
- To square a negative, use a bracket and put the squared outside the bracket

Learn more about our ALGEBRA 1 course here
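Several of these points are easiest to remember from worked examples; the numbers below are illustrative:

```latex
% Cancelling: divide numerator and denominator by the same value.
\frac{6x}{9} = \frac{3 \cdot 2x}{3 \cdot 3} = \frac{2x}{3}

% Squaring a bracket: write it twice and expand term by term.
(x+3)^2 = (x+3)(x+3) = x^2 + 3x + 3x + 9 = x^2 + 6x + 9 \neq x^2 + 9

% Solving a quadratic: factorise, then split into two equations.
x^2 - 5x + 6 = 0 \;\Rightarrow\; (x-2)(x-3) = 0 \;\Rightarrow\; x = 2 \text{ or } x = 3

% Squaring a negative: the bracket makes the sign part of the square.
(-4)^2 = 16, \qquad \text{whereas } -4^2 = -(4^2) = -16
```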
{"url":"https://www.mymathsguy.com/common-algebra-mistakes/","timestamp":"2024-11-13T06:39:06Z","content_type":"text/html","content_length":"75676","record_id":"<urn:uuid:17e0b6ec-5de7-4588-b18c-6930c52eda77>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00584.warc.gz"}
Mechanics of Materials

9.2 Strain Gages

In Figure (a) strain gages 1 and 2 recorded strains of 700 and -800 microstrain, respectively. The normal strains are

In Figure (b) strain gages 1 and 2 recorded strains of 700 and -800 microstrain, respectively. The normal strains are

A point in plane stress has the Mohr circle for strain shown in Figure (g). The modulus of elasticity of the material is E = 10,000 ksi and Poisson's ratio is 0.25. The maximum shear strain is

A point in plane stress has the Mohr circle for strain shown in Figure (h). The modulus of elasticity of the material is E = 10,000 ksi and Poisson's ratio is 0.25. The maximum shear strain is

In plane strain there are two principal strains, but in plane stress there are three principal strains.

The principal coordinate axis for stresses and strains is always the same, irrespective of the stress–strain relationship.

Do you have any comments or suggestions? Please send them to author@madhuvable.org
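The figures for these questions are not reproduced here, so no numbers can be worked; as a sketch of the standard relations the Mohr-circle questions exercise (using the usual sign conventions): in plane stress the two in-plane principal strains $\varepsilon_1, \varepsilon_2$ are accompanied by a generally nonzero out-of-plane principal strain, and the maximum shear strain compares all three principal strains.

```latex
% Out-of-plane principal strain in plane stress (from Hooke's law, with Poisson's ratio \nu):
\varepsilon_3 = -\frac{\nu}{1-\nu}\,(\varepsilon_1 + \varepsilon_2)

% Maximum shear strain: difference of the extreme principal strains,
% taken over all three principal directions:
\gamma_{\max} = \varepsilon_{\max} - \varepsilon_{\min}
```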
{"url":"https://madhuvable.org/self-tests-2/introductory-mechanics-of-materials/9-2-strain-gages/","timestamp":"2024-11-06T05:39:16Z","content_type":"text/html","content_length":"82108","record_id":"<urn:uuid:0615ecaa-1279-449f-b820-dd2111a142c5>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00575.warc.gz"}