How to test time series autocorrelation in STATA - Datapott Analytics

This article shows how to test for serial correlation of errors, i.e. time series autocorrelation, in STATA. An autocorrelation problem arises when the error terms in a regression model are correlated over time, that is, when they are dependent on each other.

Why test for autocorrelation? One of the main assumptions of the OLS estimator, according to the Gauss-Markov theorem, is that in a regression model Cov(ϵ_i, ϵ_j) = 0 ∀ i, j, i ≠ j, where Cov is the covariance and ϵ is the residual. When autocorrelation is present in the data, the error terms are correlated with each other and this assumption is violated; OLS is then no longer efficient and the usual standard errors are biased. It is therefore important to test for autocorrelation and apply corrective measures if it is present. This article focuses on two common tests for autocorrelation: the Durbin-Watson d test and the Breusch-Godfrey LM test.

As in the previous article (Heteroscedasticity test in STATA for time series data), first run the regression with the same three variables, Gross Domestic Product (GDP), Private Final Consumption (PFC) and Gross Fixed Capital Formation (GFC), for the period 1997 to 2018.

Durbin Watson test for autocorrelation

The Durbin-Watson test depends upon two quantities: the number of observations and the number of parameters to test. In this dataset, the number of observations is 84 and the number of parameters is 2 (GFC and PFC). The Durbin-Watson table lists two numbers, dl and du; these are the "critical values" (figure below).

Figure 1: Critical values of Durbin Watson test for testing autocorrelation in STATA

The Durbin-Watson statistic ranges from 0 to 4. As the scale above shows, a value between 0 and dl indicates positive serial autocorrelation. For values between dl and du, or between 4-du and 4-dl, the test is inconclusive. A value between du and 4-du indicates no autocorrelation.
Finally, a value between 4-dl and 4 indicates negative serial correlation at the 95% confidence level. In STATA, the Durbin-Watson statistic is obtained with the postestimation command estat dwatson (after declaring the data as a time series with tsset and running the regression). However, STATA does not provide the corresponding p-value; compare the reported statistic against the critical values in the Durbin-Watson table to conclude whether serial correlation exists. Download the Durbin Watson D table here.

Figure 2: Durbin Watson test statistics table for testing autocorrelation in STATA

In the figure above, the rows show the number of observations and the columns the number of parameters k. Here the number of parameters is 2 and the number of observations is 84:

Durbin-Watson lower limit from the table (dl) = 1.600
Durbin-Watson upper limit from the table (du) = 1.696

Plotting du and dl on the scale gives the following results (figure below).

Figure 3: Results of Durbin Watson test

The Durbin-Watson d statistic from the STATA command is 2.494, which lies between 4-dl and 4, implying that there is negative serial correlation between the residuals in the model.

Breusch-Godfrey LM test for autocorrelation

The Breusch-Godfrey LM test has an advantage over the classical Durbin-Watson d test: the Durbin-Watson test relies on the assumption that the residuals are normally distributed, whereas the Breusch-Godfrey LM test is less sensitive to this assumption. Another advantage is that it allows researchers to test for serial correlation at a number of lags, i.e. correlation between the residuals at time t and t-k (where k is the number of lags), unlike the Durbin-Watson test, which tests only the correlation between t and t-1. If k is 1, the Breusch-Godfrey and Durbin-Watson tests therefore lead to the same conclusion. Use the following command for the Breusch-Godfrey LM test in STATA.

estat bgodfrey

The following results will appear.
Figure 4: Results of Breusch-Godfrey LM test for autocorrelation in STATA

The hypotheses in this case are:
• Null hypothesis: There is no serial correlation.
• Alternative hypothesis: There is serial correlation.

Since the p-value (Prob > chi2) in the above table is less than 0.05 or 5%, the null hypothesis can be rejected. In other words, there is serial correlation between the residuals in the model. Therefore, correct for the violation of the assumption of no serial correlation.

Correction for autocorrelation

To correct the autocorrelation problem, use the 'prais' command in place of 'regress' (with the same variable list), adding the 'corc' option after the comma. Below is the command for correcting autocorrelation.

prais gdp gfcf pfce, corc

The results below will appear.

Figure 5: Regression results with correction of autocorrelation in STATA

At the end of the results, STATA reports the original and transformed Durbin-Watson statistics.

Figure 6: Calculation of original and new Durbin Watson statistics for autocorrelation in STATA

The new D-W statistic is 2.0578, which lies between du and 4-du, implying that there is no longer any autocorrelation; the problem has been corrected. The next article discusses the issue of multicollinearity, which arises when two or more explanatory variables in the regression model are highly correlated with each other.
Fundamental Theorems of Calculus. MVT for Integrals | JustToThePoint

"If you hear a voice within you say 'you cannot paint', then by all means paint and that voice will be silenced," Vincent van Gogh.

Antiderivatives are fundamental concepts in calculus; they are the inverse operation of derivatives. Given a function f(x), an antiderivative (also known as an indefinite integral) is a function F that can be differentiated to obtain the original function, that is, F’ = f. For example, x^3 - x + 7 is an antiderivative of 3x^2 - 1 because $\frac{d}{dx} (x^3-x+7) = 3x^2 -1$. Symbolically, we write F(x) = $\int f(x)dx$. The process of finding antiderivatives is called integration.

Alternative version of the Fundamental Theorem of Calculus. The Fundamental Theorem of Calculus states roughly that the integral of a function f over an interval is equal to the change of any antiderivative F (F'(x) = f(x)) between the ends of the interval, i.e., $\int_{a}^{b} f(x)dx = F(b)-F(a)=F(x) \bigg|_{a}^{b}$.

The fundamental theorem of calculus can be rewritten as follows. If f is a continuously differentiable function on [a, b] with derivative f′, then $f(b)-f(a) = \int_{a}^{b} f’(x)dx$; that is, the definite integral of the rate of change of a function on [a, b] is the total change of the function itself on [a, b]. Dividing both sides by b - a, $\frac{f(b)-f(a)}{b-a} = \frac{\int_{a}^{b} f’(x)dx}{b-a}$, where the left side is just the average rate of change of f over the interval [a, b].

• Suppose a car moves along a straight road, and its position at time t is given by the function s(t). The average velocity of the car over the time interval [a, b] is the average rate of change of s(t) over that interval, Average_Velocity = $\frac{s(b)-s(a)}{b-a}$.
Denoting the velocity as v(t), Average_Velocity = $\frac{s(b)-s(a)}{b-a} = \frac{1}{b-a} \int_{a}^{b} v(t)dt = \frac{1}{b-a} \int_{a}^{b} s’(t)dt$ (if y = s(t) represents the position function, then v = s′(t) represents the instantaneous velocity).

• Suppose that water is flowing into a vessel at a rate of (1 − e^−t) cm^3/s. What is the average flow from times t = 0 to t = 2? The average flow is $\frac{1}{2-0}\int_{0}^{2} (1-e^{-t})dt = \frac{1}{2}(t+e^{-t})\bigg|_{0}^{2} = \frac{1}{2}(2+e^{-2}-e^{0}) = \frac{1}{2}(1+e^{-2}) ≈ 0.57.$

The Mean Value Theorem for Integrals

If f is a function that is continuous on [a, b], then there is at least one point c ∈ [a, b] such that $f(c) = \frac{1}{b-a}\int_{a}^{b} f(x)dx$. Furthermore, since f is continuous on [a, b], by the Extreme Value Theorem there exist m and M such that m ≤ f(x) ≤ M, and so m(b-a) ≤ $\int_{a}^{b} f(x)dx$ ≤ M(b-a).

The argument runs as follows: f is continuous over an interval [a, b] ⇒ [by the Extreme Value Theorem] there exist m and M such that m ≤ f(x) ≤ M ∀ x ∈ [a, b] ⇒ [Comparison Theorem] m(b-a) ≤ $\int_{a}^{b} f(x)dx$ ≤ M(b-a) ⇒ m ≤ $\frac{1}{b-a}\int_{a}^{b} f(x)dx$ ≤ M. Since $\frac{1}{b-a}\int_{a}^{b} f(x)dx$ ∈ [m, M], and f is continuous and assumes the values m and M over [a, b] ⇒ [by the Intermediate Value Theorem] there is a number c in [a, b] such that $f(c) = \frac{1}{b-a}\int_{a}^{b} f(x)dx$.

Solved exercises

1. Suppose F’(x) = $\frac{1}{1+x}, F(0) = 1$; the MVT implies A < F(4) < B. Could you compute A and B?

By the Mean Value Theorem (f continuous on the closed interval [a, b] and differentiable on the open interval (a, b) ⇒ ∃c ∈ (a, b) such that f’(c) equals the function’s average rate of change over [a, b]), F(4) - F(0) = F’(c)(4 - 0) = $\frac{1}{1+c}·4$ ⇒ [c ∈ (0, 4)] $\frac{1}{1+4}·4 < \frac{1}{1+c}·4 < \frac{1}{1+0}·4 ⇒ \frac{4}{5} < F(4) - F(0) < 4 ⇒ \frac{4}{5} < F(4) - 1 < 4 ⇒ \frac{9}{5} < F(4) < 5$, hence A = 9/5 and B = 5.
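As a quick sanity check (not part of the original exercise), the exact value of F(4) can be computed symbolically: F(4) = 1 + ∫₀⁴ dx/(1+x) = 1 + ln 5 ≈ 2.61, which indeed lies strictly between 9/5 and 5. A minimal sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')

# F'(x) = 1/(1+x) with F(0) = 1, so F(4) = 1 + integral of 1/(1+x) from 0 to 4
F4 = 1 + sp.integrate(1 / (1 + x), (x, 0, 4))   # exact value: 1 + log(5)

print(float(F4))   # roughly 2.61, inside the MVT bounds (9/5, 5)
```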
Similarly, it can be argued by the Fundamental Theorem of Calculus. F(4) - F(0) = $\int_{0}^{4} \frac{dx}{1+x} <$ [the integrand attains its maximum 1 at x = 0] $\int_{0}^{4} dx = 4$ (this is the red rectangle, Figure 1.a). On the other hand, F(4) - F(0) = $\int_{0}^{4} \frac{dx}{1+x} =$ [substituting u = 1 + x] $\int_{1}^{5} \frac{du}{u} > \int_{1}^{5} \frac{1}{5}du = \frac{4}{5}$ (this is the yellow rectangle below, Figure 1.a), and we obtain the same result.

2. Find the average value of the function f(x) = 8 - 2x over the interval [0, 4], and find c such that f(c) equals this value.

The average value of the function over [0, 4] is $\frac{1}{4-0}\int_{0}^{4} (8-2x)dx = \frac{1}{4}·(8x-x^2)\bigg|_{0}^{4} = \frac{1}{4}·(8·4-16) = 4$. Next, we set this average value equal to f(c) and solve for c: 8 - 2c = 4 ⇒ c = 2.

The Second Fundamental Theorem of Calculus

The Second Fundamental Theorem of Calculus. If f is a continuous function and c is a constant, then f has a unique antiderivative A that satisfies A(c) = 0, and that antiderivative is given by the rule A(x) = $\int_{c}^{x} f(t)dt$.

Proof. A’(x) = $\lim_{h \to 0} \frac{A(x+h)-A(x)}{h} = \lim_{h \to 0} \frac{\int_{c}^{x+h} f(t)dt-\int_{c}^{x} f(t)dt}{h}$ = [integrating a definite integral backwards] $\lim_{h \to 0} \frac{\int_{c}^{x+h} f(t)dt+\int_{x}^{c} f(t)dt}{h}$ = [definite integrals on adjacent intervals] $\lim_{h \to 0} \frac{\int_{x}^{x+h} f(t)dt}{h}$. Now, observe that for very small values of h, $\int_{x}^{x+h} f(t)dt ≈ f(x)·h$ by a simple left-hand approximation of the integral, so $\lim_{h \to 0} \frac{\int_{x}^{x+h} f(t)dt}{h} = \lim_{h \to 0} \frac{f(x)·h}{h} = f(x)$∎

The reader should notice that A(x) solves the differential equation y’ = f with the initial condition y(c) = 0 ($\int_{c}^{c} f(t)dt = 0$). Equivalently, A’(x) = $\lim_{\Delta x \to 0}\frac{\Delta A}{\Delta x} = f(x)$, since f is continuous by assumption.

Solved exercises

1. $\frac{d}{dx} \int_{1}^{x} \frac{dt}{t^2}$.
Notice that A(x) = $\int_{c}^{x} \frac{dt}{t^2} = \int_{c}^{x} f(t)dt$ where f(t) = $\frac{1}{t^2}$ and c = 1, so by the previous result A’(x) = f(x), that is, $\frac{d}{dx} \int_{1}^{x} \frac{dt}{t^2} = \frac{1}{x^2}$.

Let’s check our previous result. $\int_{1}^{x} \frac{dt}{t^2} = \int_{1}^{x} t^{-2}dt = -t^{-1}\bigg|_{1}^{x} = -\frac{1}{x}-(-1) = 1 -\frac{1}{x} = A(x)$ ⇒ $\frac{d}{dx}A(x) = \frac{d}{dx}(1 -\frac{1}{x}) = \frac{1}{x^2}$∎ A is an antiderivative of f, and since A(1) = $\int_{1}^{1} f(t)dt = 0$, A is the only antiderivative of f for which A(1) = 0.

2. $\frac{d}{dx} \int_{2}^{x} (cos(t) -t)dt$.

Notice that A(x) = $\int_{2}^{x} (cos(t) -t)dt$ where f(t) = cos(t) - t and c = 2, so by the previous result A’(x) = f(x), that is, $\frac{d}{dx} \int_{2}^{x} (cos(t) -t)dt = cos(x)-x.$

Let’s check our previous result. $\int_{2}^{x} (cos(t) -t)dt = (sin(t) -\frac{1}{2}t^2)\bigg|_{2}^{x} = sin(x)-\frac{1}{2}x^2-(sin(2)-2) = A(x)$ ⇒ $\frac{d}{dx}A(x) = cos(x)-x$ ∎ A is an antiderivative of f, and since A(2) = $\int_{2}^{2} f(t)dt = 0$, A is the only antiderivative of f for which A(2) = 0.

3. $\frac{d}{dx} \int_{1}^{\sqrt{x}} sin(t)dt$. Let $u(x)=\sqrt{x}$ and $F(x)=\int_{1}^{u(x)} sin(t)dt = (G∘u)(x)$, where $G(x)=\int_{1}^{x} sin(t)dt$. By the Second Fundamental Theorem of Calculus and the Chain Rule, $\frac{d}{dx} \int_{1}^{\sqrt{x}} sin(t)dt = sin(u(x))·\frac{du}{dx} = sin(u(x))·\frac{1}{2}x^{-\frac{1}{2}} = \frac{sin(\sqrt{x})}{2\sqrt{x}}$.

Proof of the Fundamental Theorem of Calculus

The Fundamental Theorem of Calculus. If f is continuous over the interval [a, b], and F is any antiderivative, then $\int_{a}^{b} f(x)dx = F(b)-F(a)=F(x) \bigg|_{a}^{b}$, i.e., the integral of a function f over an interval is equal to the change of any antiderivative F (F’(x) = f(x)) between the ends of the interval.

Let F be an antiderivative, F’ = f, and define G(x) = $\int_{a}^{x} f(t)dt$.
By the Second Fundamental Theorem of Calculus, G’(x) = f(x) ⇒ [F’ = f] F’(x) = G’(x) ⇒ (F - G)’ = F’ - G’ = f - f = 0 ⇒ F(x) = G(x) + c, where c is a constant. Therefore, F(b) - F(a) = [F(x) = G(x) + c] (G(b) + c) - (G(a) + c) = G(b) - G(a) = $\int_{a}^{b} f(x)dx -\int_{a}^{a} f(t)dt = \int_{a}^{b} f(x)dx$ ∎

This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License and is based on MIT OpenCourseWare [18.01 Single Variable Calculus, Fall 2007].

1. NPTEL-NOC IITM, Introduction to Galois Theory.
2. Michael Artin, Algebra, Second Edition.
3. LibreTexts, Calculus; Abstract and Geometric Algebra; Abstract Algebra: Theory and Applications (Judson).
4. Patrick Morandi, Field and Galois Theory, Springer.
5. Michael Penn and MathMajor.
6. Joseph A. Gallian, Contemporary Abstract Algebra.
7. YouTube’s Andrew Misseldine: Calculus, College Algebra and Abstract Algebra.
8. MIT OpenCourseWare, 18.01 Single Variable Calculus, Fall 2007, and 18.02 Multivariable Calculus, Fall 2007.
Multi-Scale Permutation Entropy Based on Improved LMD and HMM for Rolling Bearing Diagnosis

Joint Research Lab of Intelligent Perception and Control, School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, 333 Long Teng Road, Shanghai 201620, China
Department of Industrial Engineering, University of Salerno, Via Giovanni Paolo II 132, 84084 Fisciano, Italy
School of Information Science and Technology, East China Normal University, Shanghai 200241, China
Ocean College, Zhejiang University, Hangzhou 316021, China
Author to whom correspondence should be addressed.
Submission received: 8 January 2017 / Revised: 3 March 2017 / Accepted: 14 April 2017 / Published: 19 April 2017

Based on the combination of improved Local Mean Decomposition (LMD), Multi-scale Permutation Entropy (MPE) and the Hidden Markov Model (HMM), the fault types of bearings are diagnosed. Improved LMD is proposed, based on the self-similarity of the roller bearing vibration signal, by extending the right and left sides of the original signal to suppress its edge effect. First, the vibration signals of the rolling bearing are decomposed into several product function (PF) components by improved LMD. Then, the phase space reconstruction of PF1 is carried out by using the mutual information (MI) method and the false nearest neighbor (FNN) method to calculate the delay time and the embedding dimension, and the scale is set to obtain the MPE of PF1. After that, the MPE features of the rolling bearings are extracted. Finally, the MPE features are used for HMM training and diagnosis. The experimental results show that the proposed method can effectively identify the different faults of the rolling bearing.

1. Introduction

Rolling bearings are the most important parts of rotating machinery, and they are easily damaged by load, friction and damping in the course of operation [ ].
Therefore, feature extraction and pattern recognition are very important in bearing diagnosis. The time-frequency analysis method has been widely used in fault diagnosis because it can provide information in both the time and the frequency domain [ ]. Moreover, there are many artificial intelligence detection methods, such as statistical sensing [ ], stray magnetic flux measurement [ ], and machine-learning methods such as neural networks and the support vector machine (SVM) [ ].

The choice of wavelet basis function in the wavelet transform (WT) [ ] has a great influence on the decomposition of a time series, and once the wavelet basis is selected, WT has no self-adaptability at different scales [ ]. Empirical mode decomposition (EMD) [ ] can adaptively decompose a complex multi-component signal into the sum of several intrinsic mode function (IMF) components; the Hilbert transform of each IMF component is then used to obtain the instantaneous frequency [ ] and instantaneous amplitude, the aim being to obtain the complete time-frequency distribution of the original signal [ ]. However, the EMD method still has some problems, such as over-envelope, under-envelope [ ], modal aliasing and the endpoint effect. The Local Mean Decomposition (LMD) [ ] method realizes signal decomposition and demodulation by constructing local mean and envelope estimation functions, which effectively solves the over-envelope, under-envelope and modal-aliasing problems of the EMD method [ ].
Compared with WT and EMD, LMD is self-adaptive, and the LMD endpoint effect is inhibited to a certain degree while the under-envelope and over-envelope problems are solved. Researchers have found that although endpoint-effect processing has been improved in LMD [ ], the impact of the endpoint effect is still very obvious: it cannot be determined whether the extrema at the left or the right boundary should be used, so it is hard to justify the envelope curve fitted through the boundary extrema [ ]. It has been found that the LMD edge effects can be addressed by extreme-point extension and mirror extension [ ], and by boundary waveform matching prediction methods including Auto-Regressive and ARMA prediction [ ]. These methods simply extend the waveform on both sides, however, without taking the inner rules or characteristics of the signal into account. The new improved LMD does take this inner discipline into account, and the experimental results show that the new method is more flexible than the existing methods.

After the improved LMD decomposition, a major issue remains: how to extract the fault information from the obtained PF components. Many studies have been carried out to solve this problem. In recent years, non-linear analysis methods such as sample entropy and approximate entropy have been widely used in mechanical equipment fault diagnosis [ ]. Permutation Entropy (PE) [ ] is a newer non-linear method that quantitatively describes the irregularity of a non-linear system, with the advantages of simple calculation and strong anti-noise ability. However, like the traditional single-scale nonlinear parameters, PE describes the irregularity of a time series at a single scale only. Aziz et al. [ ] therefore proposed the concept of MPE.
MPE can be used to measure the complexity of time series at different scales, and MPE is more robust than other methods [ ]. However, mechanical system signals are random and subject to noise interference, and applying the MPE method directly to the raw signal gives unsatisfactory detection results. Therefore, MPE is used in combination with LMD, forming a new high-performance algorithm, which effectively enhances the effectiveness of MPE in signal feature extraction.

In this paper, the rolling bearing vibration signals are decomposed into several PF components by improved LMD. Then, the phase space reconstruction of PF1 is carried out by using the mutual information (MI) and the false nearest neighbor (FNN) method to calculate the delay time and the embedding dimension, and the scale is set to obtain the MPE of PF1. After that, the feature vectors are extracted as input to the HMM and a back-propagation (BP) neural network, and the training and fault-diagnosis results are compared.

2. Improved LMD Method and Phase Space Reconstruction of MPE

The process of improved LMD is summarized in four sub-processes: first, three points are set up to form a triangular waveform; then all start points are listed (searching for the characteristic waveform takes some time); then the best integration interval is obtained; after that, the shape error parameters are studied to obtain the best extension of the signal. Finally, the right end is extended by the same method, and the extended signal is improved. Figure 1 shows that the edge effect is reduced: the end of the time-frequency representation changes significantly. The experiment therefore shows that the improved LMD greatly reduces the boundary distortion. As shown in Figure 2, the vibration signals of the rolling bearing are decomposed into several PF components by the improved LMD.
Then, the phase space reconstruction of PF1 is carried out by using the MI and the FNN to calculate the delay time and embedding dimension, and the scale is set to obtain the MPE of PF1.

As shown in Figure 3, MI [ ] determines the optimal delay time. Let $x = \{x_i, i = 1, 2, \cdots, N\}$ represent a group of signals with probability density function $p_x[x(i)]$, and let the delayed signal be $y = \{y_j, j = 1, 2, \cdots, N\}$; the joint probability density of the two groups of signals $x, y$ is $p_{xy}[x(i), y(j)]$. In the mutual information formula, $h(x)$ and $h(y)$ are the entropies of $x(i)$ and $y(j)$ respectively, measuring the average amount of information in each signal, and $h(x, y)$ is the joint information entropy.

As shown in Figure 4, FNN [ ] is an effective method for calculating the minimum embedding dimension. A time series $x(n)$ is used to construct the $m$-dimensional state space vector $Y(n) = (x(n), x(n + t), \cdots, x(n + (m - 1)t))$. Using $Y_r(n)$ to denote the nearest neighbor of $Y(n)$, the distance between them can be calculated. When the embedding dimension changes from $m$ to $m + 1$, a new coordinate $x(n + mt)$ is added to each vector $Y(n)$. If the relative increment of the distance between $Y(n)$ and $Y_r(n)$ is large at this step, they are false nearest neighbors; in this project it was found that with the threshold $R_{tol} \geq 10$ the false nearest neighbors can be easily identified by computing the nearest neighbor of each point of the trajectory.

In order for the MPE to have a better fault recognition effect, it is necessary to use the MI and the FNN to optimize the delay time and the embedding dimension when calculating the MPE in Figure 2. The MPE is then calculated based on the optimized parameters.
3. Diagnosis Flow Based on HMM

The characteristic index of the bearing failure degree is extracted from the fault signal of the needle roller bearing with different degrees of damage, and the feature index is normalized and quantized [ ]. When the HMM is established, the sequence of observations should take finitely many discrete values, and the discretized values can be used as the model training eigenvalues after quantization. The diagnosis flow based on HMM [ ] is shown in Figure 5. The Baum-Welch algorithm [ ] is used to train, adjust and optimize the model parameters so as to maximize the probability of the observation sequence. For state identification, a separate HMM is established for each fault level; the unknown fault state data are input to each model in turn, and the likelihoods are calculated and compared. The model with the largest output probability gives the fault type of the unknown signal. (The most probable state path through an observation sequence, if needed, is found by the Viterbi algorithm.)

4. Experimental Data Analysis

As shown in Figure 6, in order to validate the proposed method, experimental data for different rolling bearings are analyzed. The experimental data come from Case Western Reserve University. The bearing at the drive shaft end is a 6205-2RS SKF deep groove ball bearing, driven by a 1.47 kW motor. The pitch diameter of the ball group is 40 mm and the contact angle is 0. The sampling frequency is 12 kHz, the drive motor speed is 1750 rpm under a 2 hp load, and the number of sampling points is 2048. The collected signals are divided into four fault types: Normal, inner ring fault (IRF), outer ring fault (ORF), and ball fault (BF). Each condition has 16 samples, for a total of 64 samples. The normal state of the bearing, inner ring fault, outer ring fault and rolling body fault are tested.
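The sampling scheme described above (2048-point windows, 16 samples per condition) amounts to simple segmentation of each recording. A hedged sketch, with a random array standing in for an actual CWRU vibration record (the window length and counts follow the paper; the signal itself is simulated):

```python
import numpy as np

def segment(signal, win=2048, n_windows=16):
    """Split a 1-D vibration record into consecutive fixed-length windows."""
    assert len(signal) >= win * n_windows, "record too short"
    return signal[: win * n_windows].reshape(n_windows, win)

# Illustrative stand-in for one 12 kHz CWRU recording (about 10 s of data)
sig = np.random.default_rng(1).normal(size=120_000)
samples = segment(sig)
print(samples.shape)  # (16, 2048): 16 samples of 2048 points for one condition
```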
Due to limited space, only the improved LMD decomposition of the inner ring fault is shown, in Figure 7. Improved LMD decomposition is performed for each group of time domain signals: the multi-component AM-FM signal is decomposed into single-component AM-FM signals, and each PF component corresponds to a component of the signal, reflecting the different characteristic components present in it. Each PF component has a physical meaning and reflects real information in the original signal, so we obtain a series of PF components. The signal obtained after improved LMD decomposition is effectively denoised and the important components are extracted. PF1 carries more than 80% of the original information and can effectively reflect the complete information characteristics of the original signal; therefore, we use PF1 to calculate the MPE, which both improves computational efficiency and preserves the original information characteristics.

According to the calculation steps of the MPE, the phase space is reconstructed. Delay time and embedding dimension are the two main parameters influencing the MPE algorithm, and choosing them by hand is somewhat arbitrary, so algorithms for phase space reconstruction are needed. The main approaches are the MI and the FNN, which determine the two parameters independently, and the joint (C-C) algorithm. We found that the MPE obtained with independently determined parameters has a better mutation detection effect, so the MI and the FNN are applied to the selected PF1 component to calculate the delay time and the embedding dimension, after which the phase space is reconstructed. For the MI we set Max_tau = 100 (the program-default maximum delay) and Part = 128 (the program-default box size). The delay time calculated by the MI is shown in Figure 3.
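The multi-scale permutation entropy that these parameters feed into can be computed generically. Below is a minimal Python sketch of a standard Bandt-Pompe permutation entropy with non-overlapping coarse-graining; this is not the authors' implementation, and the parameter values (m, tau, number of scales) are illustrative defaults echoing the paper's choices:

```python
import math
import numpy as np

def permutation_entropy(x, m=4, tau=1):
    """Normalized Bandt-Pompe permutation entropy of a 1-D series.
    Counts ordinal patterns of m points spaced tau apart, then takes
    the Shannon entropy of their distribution, normalized by log(m!)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau          # number of embedded vectors
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i : i + (m - 1) * tau + 1 : tau]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return -np.sum(p * np.log(p)) / math.log(math.factorial(m))

def mpe(x, m=4, tau=1, scales=14):
    """Multi-scale PE: coarse-grain by non-overlapping means, PE per scale."""
    out = []
    for s in range(1, scales + 1):
        n = len(x) // s
        coarse = np.asarray(x[: n * s], dtype=float).reshape(n, s).mean(axis=1)
        out.append(permutation_entropy(coarse, m=m, tau=tau))
    return np.array(out)

# Illustrative behavior on a 2048-point series (the paper's window length):
rng = np.random.default_rng(0)
print(permutation_entropy(rng.normal(size=2048)))         # white noise: close to 1
print(permutation_entropy(np.arange(2048, dtype=float)))  # ramp: a single ascending pattern, so about 0
```

A complex, noise-like series visits many ordinal patterns (entropy near 1), while a regular series concentrates on few patterns (entropy near 0), which is what makes MPE useful as a fault feature across scales.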
On the basis of the determined delay time, the false nearest neighbor method is adopted to optimize the embedding dimension, with the maximum embedding dimension set to 12, criterion 1 set to 20 and criterion 2 set to 2; the curve of false-neighbor rate against embedding dimension is shown in Figure 4. When the embedding dimension reaches 4, the FNN rate no longer decreases as the embedding dimension increases, so the embedding dimension is set to 4. Using the same principle, we reconstruct the phase space for the four fault conditions of the rolling bearing and calculate the delay time and the embedding dimension for each. Due to limited space, Table 1 lists only some of the delay times and embedding dimensions.

After the phase space reconstruction, the delay time and embedding dimension are used to compute the MPE. We define the scale factor and calculate the MPE on the PF1 component, with data length = 2048 and scale = 14. In Figure 8, the MPE of PF2 is compared with the MPE of PF1; PF1 discriminates better than PF2, so we choose PF1. The rolling body faults and the inner ring faults are well distinguished, but the outer ring faults are not, so a recognition model is needed: the HMM.

After the MPE is calculated, the feature vectors are grouped into an eigenvector set used as input to the HMM and the BP neural network, which are trained and used to predict the different faults. The training curves are shown in Figure 9. For HMM modeling and training, the rolling body fault, inner ring fault, outer ring fault and normal bearing represent four hidden states, recorded as states 1-4 respectively. The observation sequences of the model are the six multi-scale permutation entropy feature sequences extracted above. The model is further specified by the state transition probability matrix A and the initial probability distribution.
The initial distribution vector is generated by the Rand random function, and the training process is normalized. The Markov chain in the HMM is described by π and A; different π and A determine its shape. The Markov chain starts from the initial state, moves along the state sequence in the direction of increasing transfer, and stops in the final state, so the model describes well a signal that evolves continuously over time.

An HMM is established for each bearing failure level: since there are four fault states, we choose four HMMs, and the model parameters are chosen accordingly. For each state, 11 groups are taken as training samples to generate the HMM for that state, and the remaining sets of feature vectors are used as test samples. The training algorithm is the Baum-Welch algorithm, with the number of iterations set to 100 and the convergence error to 0.001. During training, the maximum likelihood estimate increases as the iteration number increases, and training ends when the convergence error condition is satisfied. To prevent the model from entering an infinite loop or failing to train, the entropy sequences are scalar-quantized with the Lloyd algorithm, and the quantized sequences are input to the HMM. As the number of iterations increases, the maximum log-likelihood estimate increases until convergence is reached. After training, the four hidden-state HMM recognition models are obtained; Figure 9 shows the HMM training curves for the four states. It can be seen that all the states reach convergence within 20 iterations and converge quickly.

After training the HMM models, the remaining 20 sets of entropy eigenvectors, five for each state, are taken as test samples. Before testing, each sample entropy sequence is quantized by the Lloyd algorithm, and the quantized sequences are input to each state's model.
After the completion of HMM training in the four states, a state classifier is built. The eigenvalue sequence of each remaining test sample is scored against the λ1, λ2, λ3 and λ4 models by the forward-backward algorithm, producing the log-likelihood $ln P(O|λ)$ of the sequence under each model. The log-likelihood $ln P(O|λ)$ reflects the degree of similarity between the feature vector and an HMM: the larger the value, the closer the observed feature sequence is to the HMM of that state. The model with the maximum output log-likelihood determines the fault state of the sequence; the log-likelihood obtained under the matching state's model is larger than that obtained under the other states' models. In Table 2 it can be seen that for each fault type the log-likelihood is maximal under its own state's model, and the bearing diagnosis shows no classification error under these four states. The HMM fault diagnosis model can therefore successfully identify the fault type of bearings, with good recognition accuracy and stability.

In order to assess the recognition results of the improved LMD model, the training results of the HMM are compared with those of the BP neural network; the recognition results are shown in Table 3. The BP parameters are set as follows: trainParam_Show = 10; trainParam_Epochs = 1000; trainParam_mc = 0.9; trainParam_Lr = 0.05; trainParam_lrinc = 1.0; trainParam_Goal = 0.1.
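The classification rule described above (score a test sequence under each trained model and pick the argmax of ln P(O|λ)) can be sketched with a scaled forward algorithm for discrete observations. The two toy 2-state models and the quantized sequence below are purely illustrative, not the trained models from the paper:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """ln P(O | lambda) for a discrete-observation HMM via the scaled
    forward algorithm. pi: initial state probabilities (N,),
    A: state transition matrix (N, N), B: emission matrix (N, M)."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    loglik = np.log(c)
    alpha /= c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        c = alpha.sum()
        loglik += np.log(c)             # accumulate log of scaling factors
        alpha /= c
    return loglik

def classify(obs, models):
    """Pick the fault-state model with the largest log-likelihood."""
    scores = {name: forward_loglik(obs, *lam) for name, lam in models.items()}
    return max(scores, key=scores.get), scores

# Two made-up 2-state models over 3 quantization levels
lam_a = (np.array([0.9, 0.1]),
         np.array([[0.9, 0.1], [0.1, 0.9]]),
         np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]]))
lam_b = (np.array([0.1, 0.9]),
         np.array([[0.5, 0.5], [0.5, 0.5]]),
         np.array([[0.1, 0.8, 0.1], [0.1, 0.8, 0.1]]))

obs = [0, 0, 2, 2, 0]   # a hypothetical Lloyd-quantized entropy sequence
label, scores = classify(obs, {"BF": lam_a, "Normal": lam_b})
print(label, scores)
```

Per-step rescaling of the forward variable avoids numerical underflow on long sequences while leaving the accumulated log-likelihood exact, which is why the same quantity can be compared directly across the λ1 to λ4 models.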
Compared with LMD, the overall accuracy of the improved LMD model is excellent.

5. Conclusions

The combination of improved LMD, MPE and the HMM can still achieve high recognition accuracy when the number of training samples is small. In practical applications, the method therefore requires few training samples, which makes the new model more suitable for practical fault diagnosis. Improved LMD has a strong capability for processing non-stationary bearing signals and identifying their frequency-domain content. The experiments show that the improved LMD greatly reduces the boundary effect, and can effectively remove excess noise and extract the important information. Mutual information (MI) and the false-nearest-neighbor (FNN) method can effectively reconstruct the phase space, and the multi-scale permutation entropy reflects the mutation performance at different scales. HMMs for the various bearing states can be trained and successfully diagnose the bearing fault features. The combination of improved LMD and HMM has a high recognition rate and is well suited to fault signals that carry a large amount of information, are non-stationary, and have repeatable characteristics.

This project is supported by the Shanghai Nature Science Foundation of China (Grant No. 14ZR1418500) and the Foundation of Shanghai University of Engineering Science (Grant No. k201602005). Ming Li acknowledges the support in part by the National Natural Science Foundation of China under Grants No. 61672238 and 61272402.

Author Contributions

Yangde Gao and Wanqing Song conceived and designed the topic and the experiments; Yangde Gao and Wanqing Song analyzed the data; Ming Li made suggestions for the paper organization, and Yangde Gao wrote the paper. Francesco Villecco made suggestions both for the paper organization and for some improvements. Finally, Wanqing Song made the final guiding revisions. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.
Section   BF        OBF       IBF       Normal
t         1         1         1         2
m         6         4         5         7

Logarithm likelihood probabilities of the input sample under each model:

Fault Condition   λ1         λ2         λ3         λ4         Recognition Result
BF                −9.75854   −24.5979   −∞         −∞         λ1
OBF               −157.594   −15.1449   −∞         −∞         λ2
IBF               −∞         −55.9402   −9.33311   −792.054   λ3
Normal            −∞         −24.0932   −59.6398   −5.63077   λ4

Recognition Model     BF   OBF   IBF   Normal   Recognition Rate
Improved LMD + HMM    5    5     4     5        95.0%
Improved LMD + BP     5    4     5     4        90.0%
LMD + HMM             5    4     4     5        90.0%
LMD + BP              5    4     4     4        85.0%

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.

Gao, Y.; Villecco, F.; Li, M.; Song, W. Multi-Scale Permutation Entropy Based on Improved LMD and HMM for Rolling Bearing Diagnosis. Entropy 2017, 19, 176. https://doi.org/10.3390/e19040176
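The recognition rates in Table 3 are just correct-classification counts over the 20 test samples (five per state); for the improved-LMD HMM row:

```python
# Correct classifications per state for the improved-LMD + HMM row of Table 3
correct = {"BF": 5, "OBF": 5, "IBF": 4, "Normal": 5}
samples_per_state = 5

rate = sum(correct.values()) / (samples_per_state * len(correct))
print(f"{rate:.1%}")  # 95.0%
```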
Long Ton to Metric Tons Converter

How to use this Long Ton to Metric Tons Converter

Follow these steps to convert a given weight from long tons to metric tons.
1. Enter the input long ton value in the text field.
2. The calculator converts the given long tons into metric tons in real time using the conversion formula, and displays the result under the Metric Tons label. You do not need to click any button; if the input changes, the metric tons value is recalculated automatically.
3. You may copy the resulting metric tons value using the Copy button.
4. To view a detailed step-by-step calculation of the conversion, click the View Calculation button.
5. You can reset the input by clicking the button below the input field.

What is the formula to convert Long Ton to Metric Tons?

The formula to convert a given weight from long tons to metric tons is:

Weight(Metric Tons) = Weight(Long Ton) × 1.0160469088

Substitute the given weight in long tons, i.e., Weight(Long Ton), in the above formula and simplify the right-hand side. The resulting value is the weight in metric tons, Weight(Metric Tons).

Example 1: A large cargo ship carries 50 long tons of luxury cars. Convert this weight from long tons to metric tons.

The weight of the cargo ship in long tons is: Weight(Long Ton) = 50
The formula to convert weight from long tons to metric tons is:
Weight(Metric Tons) = Weight(Long Ton) × 1.0160469088
Substituting the given weight of the cargo ship, Weight(Long Ton) = 50, in the above formula:
Weight(Metric Tons) = 50 × 1.0160469088 = 50.8023
Final Answer: Therefore, 50 T is equal to 50.8023 t. The weight of the cargo ship is 50.8023 t, in metric tons.

Example 2: A steel shipment for constructing a skyscraper weighs 20 long tons. Convert this weight from long tons to metric tons.
The weight of the steel shipment in long tons is: Weight(Long Ton) = 20
The formula to convert weight from long tons to metric tons is:
Weight(Metric Tons) = Weight(Long Ton) × 1.0160469088
Substituting the given weight of the steel shipment, Weight(Long Ton) = 20, in the above formula:
Weight(Metric Tons) = 20 × 1.0160469088 = 20.3209
Final Answer: Therefore, 20 T is equal to 20.3209 t. The weight of the steel shipment is 20.3209 t, in metric tons.

Long Ton to Metric Tons Conversion Table

The following table gives some of the most commonly used conversions from long tons to metric tons.

Long Ton (T)    Metric Tons (t)
0.01 T          0.01016046909 t
0.1 T           0.1016 t
1 T             1.016 t
2 T             2.0321 t
3 T             3.0481 t
4 T             4.0642 t
5 T             5.0802 t
6 T             6.0963 t
7 T             7.1123 t
8 T             8.1284 t
9 T             9.1444 t
10 T            10.1605 t
20 T            20.3209 t
50 T            50.8023 t
100 T           101.6047 t
1000 T          1016.0469 t

Long Ton

The long ton, also known as the imperial ton, is a unit of mass used in the UK. It equals 2,240 pounds, or approximately 1,016 kilograms, and is used for larger weights such as cargo and ships.

Metric Tons

The metric ton (also called the tonne) is a unit of mass equal to 1,000 kilograms, or approximately 2,204.6 pounds. It is used worldwide for larger weights, such as goods, vehicles, or cargo.

Frequently Asked Questions (FAQs)

1. What is the formula for converting Long Ton to Metric Tons?
The formula to convert Long Ton to Metric Tons is: Metric Tons = Long Ton × 1.0160469088

2. Is this tool free or paid?
This weight conversion tool, which converts Long Ton to Metric Tons, is completely free to use.

3. How do I convert Weight from Long Ton to Metric Tons?
To convert weight from Long Ton to Metric Tons, use the following formula: Metric Tons = Long Ton × 1.0160469088. For example, if you have a value in long tons, substitute that value in place of Long Ton in the formula and evaluate the expression to get the equivalent value in metric tons.

Weight Converter Android Application

We have developed an Android application that converts weight between kilograms, grams, pounds, ounces, metric tons, and stones. Click the following button to see the application listing in the Google Play Store; install it to perform conversions offline on your Android device.
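The conversion is a single multiplication; a minimal Python sketch (the function name is ours, not part of the site):

```python
LONG_TON_TO_METRIC = 1.0160469088  # 1 long ton (2,240 lb) expressed in metric tons

def long_tons_to_metric_tons(long_tons: float) -> float:
    """Convert a weight from long tons (T) to metric tons (t)."""
    return long_tons * LONG_TON_TO_METRIC

# The two worked examples from the page:
print(round(long_tons_to_metric_tons(50), 4))  # 50.8023
print(round(long_tons_to_metric_tons(20), 4))  # 20.3209
```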
Right Triangle Calculator - Free Online Calculators

Right Triangle Calculator

A Right Triangle Calculator is a valuable tool for calculating various properties of right triangles, including side lengths and angles. By inputting two sides, or one side and one angle, users can determine the unknown values using the Pythagorean theorem and trigonometric functions such as sine, cosine, and tangent. This calculator is especially useful for students, engineers, and architects, providing quick and accurate solutions to geometric problems. It often includes visual representations to help users better understand the relationships between the triangle's components. Overall, the Right Triangle Calculator simplifies complex calculations and enhances comprehension of geometric concepts.
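For example, the two-legs case reduces to the Pythagorean theorem plus an inverse tangent; a minimal sketch (the function name is ours):

```python
import math

def solve_right_triangle(a: float, b: float):
    """Given the two legs a and b of a right triangle, return the
    hypotenuse and the two acute angles in degrees."""
    c = math.hypot(a, b)                      # Pythagorean theorem: c = sqrt(a^2 + b^2)
    angle_a = math.degrees(math.atan2(a, b))  # angle opposite leg a
    angle_b = 90.0 - angle_a                  # the two acute angles sum to 90 degrees
    return c, angle_a, angle_b

c, A, B = solve_right_triangle(3, 4)
print(round(c, 4), round(A, 4), round(B, 4))  # 5.0 36.8699 53.1301
```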
Toposes in Mondovì - CTTA Istituto Grothendieck School courses Marc Aiguier (CentraleSupélec, Université Paris-Saclay) Logic, categories and topos: from category theory to categorical logic Mathematical logic as the foundation of mathematics was first studied in the set-theoretic framework. Following Grothendieck's works on toposes in the late 1950s, and the fact that the latter possessed properties that brought them closer to sets, logicians under the leadership of Lawvere proposed to extend the semantics of first-order logic and its variants (higher-order logic, geometric logic, etc.) to category theory and, among other things, to a finitary axiomatization of toposes: elementary toposes. The outline of this course will be as follows: • Syntax of logics (signature, terms, formulas, sequent and theory) • Categorical semantics (subobject, structures and evaluation of terms) • Satisfaction of FO formulas in Heyting categories □ Lawvere's hyperdoctrines □ Elementary toposes □ Interpretation of sequents □ Kripke-Joyal semantics • Interpretation of HOL in elementary toposes • Geometric logic □ Geometric category □ Grothendieck toposes • Inference systems (rules, correctness and completeness) □ Syntactic category and universal model □ Representable functor T-Mod • Classifying topos Olivia Caramello (University of Insubria and Grothendieck Institute) Relative toposes for the working mathematician Relativity techniques for schemes have played a key role in Grothendieck's refoundation of algebraic geometry. We shall give an introduction to the relativity techniques for toposes, formulated in the language of stacks and fibrations, that we have been developing since 2020. After recalling the necessary preliminaries, we will discuss the problem of representing geometric morphisms both in terms of morphisms and comorphisms of sites; this will lead to the notion of "relative site" and the study of the associated "relative toposes".
We will show that the usual presheaf-bundle adjunction for topological spaces generalizes to arbitrary sites, by replacing continuous maps to the space with relative toposes (with respect to the topos of sheaves on the given site). We shall then present a generalisation of Diaconescu’s equivalence for relative toposes, formulated in the language of relative sites. Lastly, we will introduce the concept of “existential fibred site” (a broad, site-theoretic generalisation of the notion of (hyper)doctrine), and that of “existential topos” of such a site: these notions notably allow us to develop relative topos theory in a way which naturally generalizes the construction of toposes of sheaves on locales, providing a unified framework for investigating the connections between Grothendieck toposes as built from sites and elementary toposes as built from triposes. We shall discuss several examples and applications of the notions and results presented in the lectures, of algebraic, geometric and logical nature. Part of the material presented in the course comes from joint works with R. Zanfa, L. Bartoli and R. Lamagna. Alain Connes (IHES) Knots, Primes, and the Scaling Site The topic of my course will focus on the interrelation between a very specific topos, the scaling site, and the well-known analogy going back to 1963 between knots and prime numbers. The topos, which is the scaling site, is simple to define as the semi-direct product of the half-line of positive or zero real numbers by the action of the positive integers, by multiplication. It turned out in 2014 (in joint work with C. Consani) that the points of this topos were identified with a space which had occurred in 1996 in non-commutative geometry related to the Riemann zeta function, and which is obtained by starting with the adele class space and dividing it by the action of the maximal compact subgroup of the Idele class group. 
The main point of the class will be to understand the use of topos theory and of non-commutative geometry in order to understand the scaling site and the finite abelian covers of the scaling site associated to finite abelian extensions of the field of rational numbers. Grothendieck, by his theory of the etale fundamental group, has extended Galois theory from the context of fields to the context of schemes. The main new result which I will present (a joint work with C. Consani) is that the scaling site and the adele class space allow one to extend the class field theory isomorphisms, which usually relate Galois groups with groups of adelic nature, to the situation where one is no longer starting with a Galois group, but one is starting with a scheme intimately related to the field of rational numbers, and one associates to this scheme its class field theory counterpart, which can be seen either at the adelic level or at the topos level. Denis-Charles Cisinski (University of Regensburg) Synthetic ∞-category theory and elementary ∞-topoi We will propose a formal language of category theory, independent of set-theoretic foundations. Formally, this language is a variation of type theory, but, as for set theory, there is a “naïve” version in natural language (with which we actually work) and this is what we will introduce. Although this language looks like (and arguably can be) the one of ordinary category theory, a suitable version of Voevodsky’s univalence axiom will turn it into the language of infinity-categories as developed by Joyal and Lurie, in a version that is expressive enough to prove all the basic results of (higher) category theory – including the theory of Grothendieck topoi. This language also expresses (higher) category theory internally to any (higher) topos. The goal of this lecture series is to introduce such a language and to explain why it is in fact a way to define what is an elementary topos in the setting of higher category theory. 
Dennis Gaitsgory (Max Planck Institute for Mathematics) 2-Fourier-Mukai transform and geometric Langlands for non-constant group-schemes Let $X$ be a curve, and $G$ a semisimple group. Let $Bun_G$ be the moduli stack of $G$-bundles on $X$. Let $G^\vee$ be the Langlands-dual of $G$, and let $LS_{G^\vee}$ be the stack of $G^\vee$-local systems on $X$. The geometric Langlands conjecture (now a theorem) says that there is an equivalence (*) $Dmod(Bun_G)\simeq QCoh(LS_{G^\vee})$. (We suppress the difference between $QCoh$ and $IndCoh$ as it will play no role in what follows.) Now, one can twist the two sides of (*) as follows: (a) Using the short exact sequence $1\to Z_G \to G\to G_{ad}\to 1$, given a gerbe $\mathcal{G}$ on $X$ with respect to $Z_G$, we can form a twist $G_{\mathcal{G}}$ of $G$ as a group-scheme over $X$, and consider the category $Dmod(Bun_{G_{\mathcal{G}}})$; (b) Using the short exact sequence $1\to Z_{G^\vee}\to G^\vee\to (G^\vee)_{ad}\to 1$, given a gerbe $\mathcal{G}^\vee$ on $X$ with respect to $Z_{G^\vee}$, we can form a twist $G^\vee_{\mathcal{G}^\vee}$ of $G^\vee$ as a group-scheme over $X$, and consider the category $QCoh(LS_{G^\vee_{\mathcal{G}^\vee}})$. It will turn out that the above two operations are dual to each other with respect to the operation of 2-Fourier-Mukai transform. We will explore this relationship and its consequences for the Langlands correspondence. Laurent Lafforgue (Huawei) Geometry and logic of subtoposes After reviewing the multiple roles of toposes – as generalized topological spaces, as universal invariants, as pastiches of the category of sets and as incarnations of the semantics of first-order theories –, we shall recall the definition of the notion of subtopos and its double expression in terms of Grothendieck topologies and in terms of first-order logic.
We shall stress the consequence of this double expression for translating first-order provability problems into problems of generation of Grothendieck topologies, and we shall introduce the natural geometric inner and outer operations on subtoposes. After introducing these themes, we shall give a new presentation – based on some very general abstract nonsense – of the duality of Grothendieck topologies and subtoposes, and of the duality of topologies and closedness properties of subpresheaves. Then we shall present two different general formulas expressing the Grothendieck topology generated by any given family of sieves or of covering families of morphisms. We shall also make more precise the constructive processes which allow one to translate provability problems into topology generation problems. Lastly, we shall study the inner geometric operations on subtoposes – union, intersection, difference – and the outer adjoint operations of push-forward and pull-back by topos morphisms. We shall prove that pull-back operations always respect not only arbitrary intersections but also finite unions of subtoposes, and that pull-backs by "locally connected" morphisms even respect arbitrary unions of subtoposes. Part of the material of the lecture course is classical – borrowed from SGA4, from O. Caramello's book "Theories, Sites, Toposes" and from other references – and part is new, borrowed from a joint paper coauthored with O. C. to appear soon under the title "Engendrement de topologies, démontrabilité et opérations sur les sous-topos". Michael Robinson (American University) Practical systems modeling in categories using sheaves Modeling practical systems using category theory (and especially topoi) can be daunting! There is plenty of expressive power, but in a sense it is too much. Fortunately, most practical systems also have a notion of topology, which provides strong modeling constraints.
The theory of sheaves forms a functorial bridge between the topology of a model and its associated data. Famously, categories of sheaves are topoi. Therefore, although the job of modeling with sheaves is easier than purely with categories, it does not come at the expense of expressivity. Over the past decade or so, the topological data modeling community has developed tools that allow one to easily and effectively build sheaf models for common system models. This summer school session will explain how these work, both theoretically and practically. It is also important to note that practical systems also have to deal with noise, errors, and uncertainty. Fortunately, sheaves are topological in their organization and in their representation of data. With a little forethought, sheaves can be used to handle the vagaries of practical systems. Moreover, in many scientific and engineering settings, there is geometric information. This allows one to measure how closely aligned the model posited by a sheaf and experimentally collected data are. The result is a statistical interpretation of data, models, and topology that has a practical algorithmic implementation… and this too is functorial! This summer school session will include software demonstrations to introduce participants to the use of these tools. Isar Stubbe (Université du Littoral-Côte d’Opale) Quantaloid-enriched categories for sheaf theory A sheaf F on a locale L is commonly defined as a contravariant Set-valued functor on L that satisfies the gluing condition. Together with natural transformations, these sheaves form the objects and morphisms of the localic topos Sh(L). While every localic topos is a Grothendieck topos, the converse does not hold—there exist Grothendieck toposes that are not localic. 
For any two elements (or sections) x and y in such a sheaf F (meaning that x is in Fu, and y is in Fv, for some u,v in L), we can measure the extent to which x equals y by computing the supremum of all w below both u and v on which the restrictions of x and y are equal (in the set Fw). This L-valued map on pairs of elements of F plays the role of a characteristic function for equality. In fact, the notion of sheaf can be reformulated in terms of such an L-valued map; and (together with an appropriate notion of morphism) these 'L-sets' form a category equivalent to Sh(L). This latter formulation puts sheaf theory in the realm of many-valued logic, or more specifically, of quantaloid-enriched categories – and this will be the central theme of our lectures. Concretely, we shall first define quantales and quantaloids, explore some key universal constructions on these, and discuss several examples (noting in particular that locales are precisely 'cartesian' quantales). Then we will explain the fundamentals of quantaloid-enriched category theory, and show its flexibility and applicability through various examples. Finally we shall indicate how every Grothendieck topos Sh(C,J) is equivalent to a category of 'Q-sets' for an appropriate quantale Q, thereby showing that "every Grothendieck topos is quantalic". Invited speakers Joseph Bernstein (Tel Aviv University) Groups, Groupoids, Stacks and Representation Theory In my talk I would like to introduce a new approach to Representation Theory. Let G be an abstract group and k some field. A representation of the group G over the field k is usually defined as a pair (π, V), where V is a vector space over k and π is a morphism from G to Aut(V). One of the basic problems in Representation Theory is the study of the category Rep(G) of such representations. In my talk I will explain that there is another natural way to describe this category.
Namely, the category Rep(G) is naturally equivalent to the category of sheaves Sh(BG) on some "geometric" object – the basic groupoid BG of the group G. Thus we have two equivalent definitions of representations – the standard one and the categorical definition in terms of groupoids. I will explain that the more sophisticated categorical description is the more "correct" one. For example, it gives a more adequate description of the category Rep(G) in cases when we have some external symmetries. The gap between these two definitions becomes much more profound when we move from the category of sets to other categories (more precisely – sites). In this case the role of groupoids is played by stacks. So I propose to define the category of representations of a group G as the category of sheaves on the basic stack BG. I will discuss how these things play out in the important case when G is an algebraic group over a local field. My talk will partially follow my paper in arXiv:1410.0435. The relation between groups and groupoids is discussed in a paper by R. Brown "From Groups to Groupoids: a Brief Survey", Bull. London Math. Soc. 19 (1987) 113-134. Paolo Giordano (University of Vienna) The Grothendieck topos of generalized smooth functions The need to describe abrupt changes or the response of nonlinear systems to impulsive stimuli is ubiquitous in applications. Also within mathematics, L. Hörmander stated: "In differential calculus one encounters immediately the unpleasant fact that not every function is differentiable. The purpose of distribution theory is to remedy this flaw; indeed, the space of distributions is essentially the smallest extension of the space of continuous functions where differentiability is always well defined".
We first describe the universal property of the space of distributions, but then we underscore the main deficiencies of this theory: we cannot evaluate a distribution at a point, we cannot make non-linear operations, let alone composition, we do not have a good integration theory, etc. We then present generalized smooth functions (GSF) theory, a nonlinear theory of generalized functions (GF) as used by physicists and engineers, where GF are ordinary set-theoretical maps defined on and taking values in a non-Archimedean ring extending the real field (this problem has been faced e.g. by: Schwartz, Lojasiewicz, Laugwitz, Schmieden, Egorov, Robinson, Colombeau, Rosinger, Levi-Civita, Keisler, Connes, etc.); GSF are closed with respect to composition so that nonlinear operations are possible; these operations coincide with the usual ones for smooth functions; all classical theorems of differential and integral calculus hold; we have several types of sheaf properties, and GSF indeed form a Grothendieck topos; we have a full theory of ODE, and general existence theorems for nonlinear singular PDE, e.g. the Picard-Lindelöf theorem for PDE; every Cauchy problem with a smooth PDE is Hadamard well-posed; we can generalize the classical Fourier method also to non-tempered GF (this problem has been faced e.g. by Gelfand, Sobolev); we have several applications in the calculus of variation with singular Lagrangians, elastoplasticity, general relativity, quantum mechanics, singular optics, impact mechanics (this problem has been faced by J. Marsden). We close by presenting a project in collaboration with several Japanese universities about how to apply GSF theory to have GF in diffeological spaces and hence to make homotopy theory where continuous functions can be treated as smooth functions, or to try to replicate synthetic differential geometry in the Grothendieck topos using nilpotent infinitesimals for this type of GF. 
Peter Haine (University of California, Berkeley) Reconstructing schemes from their étale topoi In Grothendieck’s 1983 letter to Faltings that initiated the study of anabelian geometry, he conjectured that a large class of schemes can be reconstructed from their étale topoi. In this talk, I’ll discuss joint work with Magnus Carlson and Sebastian Wolf, generalizing work of Voevodsky, that proves Grothendieck’s conjecture. Specifically, we show that over a finitely generated field k of characteristic 0, seminormal finite type k-schemes can be reconstructed from their étale topoi. Over a finitely generated field k of positive characteristic and transcendence degree ≥ 1, we show that perfections of finite type k-schemes can be reconstructed from their étale topoi. Combined with joint work with Barwick and Glasman, and work of Makkai and Lurie on strong conceptual completeness, we deduce that such schemes can also be reconstructed from two different condensed categories of points of their étale topoi. Our talk will focus on the topos-theoretic aspects of these results. Matthias Hutzler (University of Gothenburg) Projective Space and Line Bundles in Synthetic Algebraic Geometry Synthetic algebraic geometry is the study of schemes using an internal language of the Zariski topos. More precisely, Homotopy Type Theory (HoTT) is interpreted in a higher-topos variant of the Zariski topos in order to have a powerful language that can talk about higher homotopical objects just as easily as set-level objects. From this internal perspective, schemes such as for example projective space appear simply as certain h-sets without any added structure. In the talk we present a synthetic version of the classical classification result for line bundles on projective space. The language of HoTT allows us to give a stronger variant of the classical statement, describing the 1-type of line bundles instead of its set-truncation, the Picard group. 
This is used to give a proof that requires nontrivial algebraic arguments only for the case of the projective line, and derives the general case by an interpolation argument. Asgar Jamneshan (Koç University) Some Applications of Toposes of Measure-Theoretic Sheaves We construct toposes of sheaves on measure spaces and highlight the usefulness of interpreting certain structures from classical measure theory and functional analysis, combined with a Boolean internal logic, in applications to ergodic structure theory and vector duality. Maxim Kontsevich (IHES) What is the spectrum of quantum algebra? For quantum algebras, like e.g. algebras of polynomial differential operators or q-difference operators, the "minimal" non-trivial modules are holonomic ones. These objects are not point-like, and correspond roughly to vector bundles on Lagrangian subvarieties in the semi-classical limit. I'll talk about various approaches to supports: 1) via behavior at infinity, 2) via reduction modulo large primes, 3) via multiplicative semi-norms in the non-archimedean case. Mariano Hugo Luiz (University of São Paulo) From quantales to a Grothendieck monoidal topology: Towards a closed monoidal generalization of topos In this talk, we will present some recent developments associated with some PhD theses in IME-USP (Institute of Mathematics and Statistics, University of São Paulo, Brazil) on categories of sheaves over quantales and categories of quantale valued sets, returning to a theme of studies involving logic and categories carried out at IME-USP in the latter half of the 1990s, but now from a new perspective: considering semicartesian and commutative quantales, as non-idempotent generalizations of locales (= complete Heyting algebras). We will list some properties of the (monoidal) categories obtained, indicating some similarities and differences with Grothendieck toposes.
The main goal of these efforts is to develop a closed monoidal but not cartesian closed generalization of the notion of elementary topos, in order to cover some mathematical situations (including generalizations of metric spaces), to enable an axiomatic study of these categories, and a general definition of their internal logic, which shows clues of being some form of linear logic. A future goal is to establish a precise relationship between the present approach and the enriched category approach to sheaves over quantales (and quantaloids) developed by I. Stubbe. Axel Osmond (Grothendieck Institute) Morphisms and comorphisms of sites: double-categorical and profunctorial aspects Geometric morphisms can be induced either from morphisms or comorphisms of sites, respectively in a contravariant and in a covariant way; the first are characterized through a cover-preservation property, the second through a cover-reflection property. As both define a relevant notion of 1-cells between sites, one may ask two questions: • is there a proper way to mix them all together in a single categorical structure on sites, and if so, does it help in understanding why we have those twin classes of functors rather than a single one? • is it possible to subsume them into a single notion jointly generalizing the cover-preservation and cover-reflection into a single condition? In this talk, based on an ongoing work with Olivia Caramello, we will try to address those two questions. In the first part, we will explain how morphisms and comorphisms, though they do not compose with each other, can be arranged as the horizontal and vertical 1-cells of a double-category of sites, and how the sheaf-topos construction defines a double-functor to the quintet double-category of topoi.
We will also discuss some companion and conjoint constructions in this setting, as well as a link between this double-category of sites and the double-category of co-algebras, lax and colax morphisms for a 2-comonad. In the second part, we will try to answer the second question through notions of continuity for distributors (a.k.a. profunctors); building on Bénabou's theory of flat distributors and refining a previous definition of Johnstone and Wraith, we prove an equivalence between continuous distributors between sites and geometric morphisms between the corresponding sheaf topoi, and discuss how this notion relates respectively to morphisms and comorphisms of sites. Hans Riess (Duke University) John Baez and others have noted a compelling analogy between adjoint functors in category theory and adjoints of linear operators in Hilbert spaces. This analogy is particularly striking when considering enriched adjunctions, where the hom mimics an inner product. On the other hand, Hodge theory, which bridges PDEs with geometry and topology, fundamentally relies on the existence of an adjoint operator within the de Rham complex. In this talk, we explore ongoing efforts to categorify the Hodge Laplacian, focusing on the connection Laplacian—a special case involving parallel transport between tangent spaces of a manifold. We propose a framework for parallel transport and a connection Laplacian within the setting of presheaves on a preorder into the category of V-enriched categories. Finally, we outline preliminary examples of this theory, with potential applications to logic, formal concept analysis, and tropical geometry. Michael Shulman (University of San Diego) Internal languages of diagrams of toposes Internal languages are a powerful tool for studying toposes, but traditionally they can only be applied to a single topos at a time, whereas frequently we are interested in a number of toposes related by a diagram of geometric morphisms.
The collection of internal languages of the toposes in such a diagram, together with syntactic operations relating them corresponding to the direct and inverse image functors of the geometric morphisms and transformations, forms a “modal” type theory, with the functor operations known as “modalities”. The first general modal type theories, applicable to a diagram of arbitrary shape, have recently been formulated by Gratzer, Kavvos, Nuyts, and Birkedal, but interpreting these theories in a diagram of toposes appears to require that all the geometric morphisms be essential. I will show how this requirement can be avoided, by generalizing the construction of a “fibration of sites” presenting a single geometric morphism to a presentation of an arbitrary diagram of such morphisms. Contributed talks Thiago Alexandre (University of São Paulo and IMJ, Paris) The theory of derivators was originally developed by Grothendieck with strong inspiration from topos cohomology. In a letter sent to Thomason, where he explains the main ideas and motivations guiding the formal reasoning of derivators, Grothendieck also remarks that those are Morita-invariant. This means that if two small categories X and Y have equivalent topoi of presheaves, then the categories D(X) and D(Y) are also equivalent for any derivator D. This observation suggests that it may be possible to extend any derivator D to the entire 2-category of topoi and geometric morphisms between them. Grothendieck speculates that such an extension is always possible and essentially unique. In this case, every derivator D defined over small categories would come from a derivator D’ defined over topoi via natural equivalences of categories of the form D(X) = D’(X^), for X varying through small categories and X^ denoting the category of presheaves over X. However, despite these considerations, a theory of derivators over topoi has not yet been developed. To address this gap, I am currently developing a theory of topological derivators.
These derivators, defined on the 2-category of topoi, aim to provide answers to Grothendieck’s conjecture. Beyond applications in geometry, the theory of topological derivators also offers a potential framework to connect categorical logic and homotopical algebra. In my talk, I would like to present the theory of topological derivators and some of its main results so far, including examples, some techniques to construct topological derivators, and how topological derivators are related to the homotopy theory of topoi. Igor Baković (Co)Fibrations, (pseudo)distributive laws and (quasi)Toposes Mainly motivated by the symmetric monad in toposes which classifies Lawvere’s distributions, Bunge and Funk developed the theory of admissible Kock-Zoberlein doctrines or (co)lax idempotent 2-monads. Their first main contribution is a characterization of the Eilenberg-Moore 2-category of algebras of an admissible 2-monad in terms of (co)completeness. Their second major contribution is a description of the Kleisli 2-category by means of its bifibrations which are defined by a certain bicomma object condition and the corresponding comprehensive factorization for those 1-cells which have an admissible domain. However, besides some less-known and exotic examples from computer science theory, the symmetric topos was their only major example of an admissible 2-monad. In my talk, I prove that one of the most fundamental 2-monads – the associated (split) fibration – is admissible. Then I show how the known case of a 2-monad whose underlying 2-functor is defined over a fixed base category extends to a 2-functor on the 2-category whose objects are functors and 1-cells are colax squares – those which commute up to an upward-pointing natural transformation.
If one wants to extend the action of a 2-monad on the 2-category of lax squares – those which commute up to a downward-pointing natural transformation – one needs to impose the existence of pullbacks in codomains of objects which we treat as generalized fibrations following Bénabou. Léo Bartoli (Grothendieck Institute and ETH Zurich) Local Fibrations and Relative Diaconescu’s Theorem To develop relative topos theory, that is, topos theory over an arbitrary base topos, the language of stacks – or, more generally, indexed categories – has proven to be very efficient. The notion of a category indexed by a base site constitutes the relative analogue of the concept of a category, and by endowing the associated fibrations with certain Grothendieck topologies, we arrive at the notion of a relative site. In this presentation of joint work with Prof. Olivia Caramello, we first introduce a localization of the notion of fibration with respect to certain topologies (specializing to ordinary fibrations when these topologies are trivial). Indeed, it is within this broader framework that we are able to obtain a comprehensive understanding of which morphisms of sites induce morphisms of relative toposes. As in the “absolute” case, we identify a relative extension of a morphism of sites along a canonical functor (a relative analogue of the Yoneda embedding), which preserves finite limits fiberwise precisely when the functor in question induces a morphism of relative toposes. The notion of local fibrations allows for an elegant characterization of those functors that induce morphisms of relative toposes: they are precisely the morphisms of sites that are also morphisms of local fibrations.
These considerations naturally lead to the concept of a relatively flat functor, which in turn allows us to obtain a relative version of Diaconescu’s theorem for local fibrations (and hence also for ordinary fibrations), characterizing the relative geometric morphisms towards a relative sheaf topos in terms of relatively flat functors. Claudio Fontanari (University of Trento) Moduli spaces of curves and topos theory According to Grothendieck, the moduli spaces of curves are among mathematical objects “les plus beaux, les plus fascinants que j’aie rencontrés” and topos theory is “fait sur mesure pour exprimer ce genre de situation” much better than the language of algebraic stacks introduced by Deligne and Mumford. In my short communication I would like to pose the following open question: “Consider the moduli spaces of curves as Grothendieck toposes. May this (more natural, at least according to Grothendieck) approach have any interesting consequence on our understanding of the geometry of these beautiful spaces?”. I am also going to propose a tentative answer, by referring to Kontsevich’s “hidden smoothness philosophy” and to derived algebraic geometry. Ryuya Hora (University of Tokyo) The colimit of all monomorphisms classifies hyperconnected geometric morphisms One of the most fundamental theorems in topos theory is the correspondence theorem between subtopoi and Lawvere-Tierney topology. This theorem allows us to reduce the classification problem of subtopoi to the classification problem of idempotent internal semilattice homomorphisms on the subobject classifier. What about the “dual case” of quotient topoi (i.e., connected geometric morphisms)? This problem is the first of Lawvere’s open problems in topos theory: […] Is there a Grothendieck topos for which the number of these quotients is not small? At the other extreme, could they be parameterized internally, as subtopoi are? 
In this presentation, I will discuss a partial answer to this question by presenting a classification theorem for hyperconnected geometric morphisms, which are quotient topoi satisfying an additional condition. Our theorem classifies hyperconnected geometric morphisms using internal semilattice homomorphisms, similar to the case of subtopoi. We define a semilattice called the local state classifier (LSC) as the “colimit of all monomorphisms” and prove that semilattice homomorphisms from the LSC to the subobject classifier are in one-to-one correspondence with hyperconnected geometric morphisms. The interesting aspect of this result lies in the fact that actual computations can be performed. Although the definition of a local state classifier is transcendental, it can be concretely constructed in the case of a Grothendieck topos. For example, for a presheaf topos over a small category, the local state classifier is the presheaf consisting of all quotient objects of the representable functors, which is dual to the construction of the subobject classifier! This talk is based on the paper “Internal Parameterization of Hyperconnected Quotients”, arXiv:2302.06851 Giuseppe Leoncini (Masaryk University & University of Milano) Homotopy colimits over a topos Starting from a 1-categorical base V which is not assumed to be endowed with a choice of model structure (or any kind of homotopical structure), we define homotopy colimits enriched in V in such a way that: (i) for V = Set, we retrieve the classical theory of homotopy colimits, and (ii) restricting to isomorphisms as weak equivalences, we retrieve ordinary and enriched 1-colimits. We construct the free homotopy V-cocompletion of a small V-category in such a way that it satisfies the expected universal property. For V = Set, we retrieve Dugger’s construction of the universal homotopy theory on a small category C.
We define the homotopy theory of internal infinity-groupoids in V as the homotopy V-enriched cocompletion of a point, and argue that V-enriched homotopy colimits correspond to weighted colimits in infinity-categories enriched in internal infinity-groupoids in V, thus providing a convenient model to perform computations. Again, taking V = Set, this retrieves the classical notions for ordinary infinity-categories. We compare our approach with some previous definitions of enriched homotopy colimits, such as those given by Shulman, Lack & Rosicky, and Vokrinek, and we show that, when the latter are defined and well behaved, they coincide with ours up to Quillen homotopy. The theory behaves well when the base of enrichment is a Grothendieck 1-topos. As an application, we give a new proof of the following fact (conjectured by Hill and recently proven by Shah): we show that the so-called genuine (or fine) homotopy theory of G-spaces is the G-equivariant homotopy cocompletion of a point. Marco Panzeri (University of Insubria) 2-categorical constructions on classifying toposes In this talk, based on joint work with Olivia Caramello, we provide logical descriptions of a number of fundamental constructions in the 2-category of Grothendieck toposes; more specifically, we describe geometric theories classified by weighted limits of classifying toposes and morphisms between them induced by interpretations, and apply this result to obtain logical descriptions of fibred products, comma objects, and small limits of toposes. Iosif Petrakis (University of Verona) Toposes with dependent and codependent arrows We introduce dependent arrows as categorical generalisations of dependent functions over type-families (set-families) in Martin-Löf Type Theory (Bishop Set Theory). Our categorical description of dependency is an extension of earlier work of Pitts.
Namely, we describe the type-categories of Pitts as categories with family-arrows and Sigma-objects, and we introduce categories with dependent arrows, or dep-categories, independently of Sigma-objects. The existence of dependent arrows in a dep-category affects the definition of the corresponding Sigma-structure, as the second-projections, which are appropriate dependent arrows, are crucially involved in the definition of the new Sigma-objects. All concepts and results concerning the above categories can be Pitts described only the canonical family arrows on a topos. Here we describe the corresponding canonical Sigma-structure and the canonical dep-structure on a topos, together with their 2-categorical versions (this part of our presentation is joint work with Yannick Ehrhardt). All these concepts and all related results on toposes can be dualised, and the canonical cofamily arrows, coSigma-objects, and codependent arrows on toposes emerge in a natural manner. Elio Pivet (Grothendieck Institute and ETH Zurich) The theory of categories enriched in a monoidal category is well-known, and it has been shown that it can be extended to a theory of enrichment in bicategories. Quantaloids are a particular case of bicategories, and it has been shown that the theory of categories enriched in a quantaloid generalizes that of sheaves over a locale, and even recovers the case of sheaves over a site as a particular case. We will present the general definitions for sheaves enriched in a bicategory, and extend some definitions from quantaloid enrichment theory to the general case of a fixed bicategory. This will enable us to define a particular category of enriched categories which plays the role of the category of sheaves over the base bicategory. This theory recovers simultaneously the case of sheaves over quantaloids and that of sheaves over sites.
In general, such categories are not Grothendieck toposes (this was already the case for general quantaloids), but they can be seen as reflective subcategories of categories of presheaves in a suitable sense. As these objects are more general than toposes, we call them B-toposes (B denoting the generic name for a bicategory). This is joint work with Olivia Caramello. Fabian Ruch (Göteborgs Universitet) Logics as Kan Injectivity Classes of Toposes The aim of this talk is to define the notion of (fragment of geometric) logic using the technology of Kan injectivity introduced by Di Liberti, Lobbia and Sousa. We shall show that many relevant fragments of geometric logic can be described as injectivity classes, thus offering a good framework to describe various logics under this pattern. We first recall the right Kan injectivity of algebraic toposes with respect to all geometric morphisms. We then recall the right Kan injectivity of coherent toposes with respect to flat geometric morphisms. Lastly, we present right Kan injectivity properties of localic toposes, locally decidable toposes, regular toposes and disjunctive toposes. Ivan Tomasic (Queen Mary University of London) Galois theory of differential schemes Since 1883, Picard-Vessiot theory has been developed as the Galois theory of differential field extensions associated to linear differential equations.
Inspired by categorical Galois theory of Janelidze, and by using novel methods of precategorical descent applied to algebraic-geometric situations related to differential schemes viewed as precategory actions, we develop a Galois theory that applies to morphisms of differential schemes, and vastly generalises the linear Picard-Vessiot theory, as well as the strongly normal theory of Davide Trotta (University of Padua) Presheaves, Sheaves and Sheafification via triposes The notion of tripos was originally introduced by Hyland, Johnstone and Pitts to explain from an abstract perspective in which sense localic sheaf toposes and Hyland’s realizability toposes are instances of the same construction. The main purpose of this work is to further investigate the common structures of these classes of toposes from a more geometric point of view. In particular, we first introduce an exact category of “abstract presheaves” for (arbitrary-based) triposes by combining the tripos-to-topos construction with the full existential completion. The given name is motivated by the fact that abstract presheaves coincide with localic presheaves in the case of localic triposes. Then, we call ∃-sheaf triposes those triposes whose abstract presheaves category is a topos, and we prove that every Set-based tripos is an ∃-sheaf tripos. Furthermore, we show that the sheafification between a localic topos and its presheaf topos can be generalized to an “abstract sheafification adjunction” between an ∃-sheaf tripos and its full existential completion. In particular, we conclude that any tripos-to-topos construction of a Set-based tripos can be seen as the category of j-sheaves for the Lawvere-Tierney topology j induced by an abstract sheafification adjunction. This talk is based on joint work with Maria Emilia Maietti.
Joshua Wrigley (Queen Mary University of London) A groupoidal classification of theories via topos theory There has been renewed interest within the model theory community in the question of whether a logical theory can be characterised by the symmetries of (a set of) its models, endowed with some further topological data. The first appearance in the literature of this kind of result is a paper of Ahlbrandt and Ziegler (“Quasi finitely axiomatizable totally categorical theories,” Annals of Pure and Applied Logic, vol. 30, no. 1, pp. 63–82, 1986), where it is shown that a countably categorical theory is characterised, up to bi-interpretability, by the topological automorphism group of its unique countable model. Recently, Ben Yaacov (“Reconstruction of non-ℵ₀-categorical theories,” Journal of Symbolic Logic, vol. 87, no. 1, pp. 159–187, 2022) has shown that the countable categoricity assumption can be dropped if we instead work with topological groupoids (though his groupoids are not groupoids of models). We will present a topos-theoretic approach to this problem by associating topoi to both logical and topological/algebraic data. Each logical theory has a classifying topos, which characterises the theory up to Morita equivalence (a mild generalisation of bi-interpretability, see McEldowney, “On Morita equivalence and interpretability,” Review of Symbolic Logic, vol. 13, no. 2, pp. 388–415, 2020), while a topological groupoid generates a topos of equivariant sheaves. We will identify when the topos of sheaves on an open topological groupoid of models classifies a logical theory, and when two such groupoids have equivalent topoi of sheaves, extending the classical result of Ahlbrandt and Ziegler.
Fernando Yamauti (University of São Paulo and University of Regensburg) Some Properties of some Homotopy Theories of Topoi Homotopy theory in a topos is a relatively well-developed topic, be it from the point of view of test topoi or the homotopy theory internal to a (higher) topos. Nevertheless, the homotopy theory of the category of (higher) topoi itself has not been well explored in the literature. Following the historical developments since Grothendieck’s Galois theory, one is tempted to use shape theory as the main homotopy-theoretical invariant attached to a topos. Still, much of shape theory is only known after profinite completion, after which much of the theory is completely simplified. Coming from a completely different side, one is also confronted with another possible homotopy theory. For every choice of an interval object inside the category of topoi, one can define suitable notions of n-connectedness taking into account the possible lack of points in a topos. Natural candidates for intervals are the Sierpiński interval and the ordinary topological interval. In this talk, I intend to talk about ongoing work on the relation between those two flavours of homotopy theory. If time permits, I shall also comment on related work in progress towards the extension of Joyal-Tierney’s paradigm of “topos as localic stacks” to higher topoi.
Wolfram|Alpha Examples: Refraction

Refraction is the change in direction of light rays due to a change in the medium they pass through. Wolfram|Alpha provides a wide range of formulas for refraction, including Snell's law, and can be used to compute refractive indices or the position of rainbows. Do computations using the thin lens equation or Fresnel's equation. Compute the effects of refraction on light in lenses, prisms and raindrops.

Compute refractions using Snell's law. Do a Fresnel's law computation. Compute the refraction of light in a prism. Do computations using the thin lens equation. Calculate the height of a rainbow. Compute the focal length of a lens.

Refractive Indices

Compute the refractive index for a wide variety of substances: calculate the refractive index of humid air, determine the refractive index of water, or look up the refractive index of a material.
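Two of the computations listed above, Snell's law and the thin lens equation, are easy to reproduce by hand. The following is a small illustrative Python sketch (our own, not Wolfram|Alpha code) of Snell's law, n1·sin(θ1) = n2·sin(θ2), and of the thin lens equation, 1/f = 1/d_o + 1/d_i:

```python
import math

def snell_refraction_angle(n1, n2, theta1_deg):
    """Angle of the refracted ray from Snell's law: n1*sin(theta1) = n2*sin(theta2).

    Returns None when sin(theta2) would exceed 1, i.e. the incidence angle is
    beyond the critical angle and the ray undergoes total internal reflection.
    """
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

def thin_lens_image_distance(f, d_object):
    """Image distance from the thin lens equation 1/f = 1/d_o + 1/d_i."""
    return 1.0 / (1.0 / f - 1.0 / d_object)

# Light entering water (n ~ 1.333) from air at 30 degrees bends towards the normal:
angle = snell_refraction_angle(1.0, 1.333, 30.0)   # about 22 degrees
# A ray trying to leave water at 60 degrees exceeds the critical angle (~48.6 degrees):
tir = snell_refraction_angle(1.333, 1.0, 60.0)     # None
# An object 30 cm from a lens of focal length 10 cm images at 15 cm:
d_image = thin_lens_image_distance(10.0, 30.0)
```

The refractive index 1.333 for water is the standard visible-light value; for dispersive materials the index, and hence the refraction angle, varies with wavelength.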
Evaluation of global teleconnections in CMIP6 climate projections using complex networks

Articles | Volume 14, issue 1

© Author(s) 2023. This work is distributed under the Creative Commons Attribution 4.0 License.

In climatological research, the evaluation of climate models is one of the central research subjects. As an expression of large-scale dynamical processes, global teleconnections play a major role in interannual to decadal climate variability. Their realistic representation is an indispensable requirement for the simulation of climate change, both natural and anthropogenic. Therefore, the evaluation of global teleconnections is of utmost importance when assessing the physical plausibility of climate projections. We present an application of the graph-theoretical analysis tool δ-MAPS, which constructs complex networks on the basis of spatio-temporal gridded data sets, here sea surface temperature and geopotential height at 500 hPa. Complex networks complement more traditional methods in the analysis of climate variability, like the classification of circulation regimes or empirical orthogonal functions, assuming a new non-linear perspective. While doing so, a number of technical tools and metrics, borrowed from different fields of data science, are implemented into the δ-MAPS framework in order to overcome specific challenges posed by our target problem. Those are trend empirical orthogonal functions (EOFs), distance correlation and distance multicorrelation, and the structural similarity index. δ-MAPS is a two-stage algorithm. In the first stage, it assembles grid cells with highly coherent temporal evolution into so-called domains. In the second stage, the teleconnections between the domains are inferred by means of the non-linear distance correlation.
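The non-linear distance correlation used to infer links between domains can be computed directly from pairwise distance matrices. The following is a minimal NumPy sketch of the empirical distance correlation of Székely, Rizzo, and Bakirov for two univariate samples; it is an illustration of the statistic, not the δ-MAPS implementation:

```python
import numpy as np

def distance_correlation(x, y):
    """Empirical distance correlation between two equal-length 1-D samples.

    Both samples are turned into pairwise distance matrices, each matrix is
    double-centred, and the correlation is formed from the resulting
    (squared) distance covariance and distance variances.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    a = np.abs(x[:, None] - x[None, :])            # pairwise distances in x
    b = np.abs(y[:, None] - y[None, :])            # pairwise distances in y
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                         # squared distance covariance
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(dcov2 / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
x = rng.normal(size=200)
linear = distance_correlation(x, 2.0 * x + 1.0)        # exact dependence -> 1
nonlinear = distance_correlation(x, x ** 2)            # Pearson would be near 0 here
independent = distance_correlation(x, rng.normal(size=200))
```

The population distance correlation vanishes only under independence, so the statistic also detects purely non-linear coupling such as that between x and x²; this is the property that motivates its use for teleconnection inference in place of Pearson correlation.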
We construct 2 unipartite and 1 bipartite network for 22 historical CMIP6 climate projections and 2 century-long coupled reanalyses (CERA-20C and 20CRv3). Potential non-stationarity is taken into account by the use of moving time windows. The networks derived from projection data are compared to those from reanalyses. Our results indicate that no single climate projection outperforms all others in every aspect of the evaluation. But there are indeed models which tend to perform better/worse in many aspects. Differences in model performance are generally low within the geopotential height unipartite networks but higher in sea surface temperature and most pronounced in the bipartite network representing the interaction between ocean and atmosphere. Received: 02 Jun 2022 – Discussion started: 14 Jun 2022 – Revised: 11 Nov 2022 – Accepted: 22 Nov 2022 – Published: 12 Jan 2023 The evaluation of general circulation models (GCMs) is one of the key topics of climate sciences. This evaluation is indispensable in the assessment of uncertainties in the projection of climate change. At the same time, it serves as a guideline for further model development. Established methods of climate model evaluation include comparison of spatial and temporal means, and often also the variability, of important climate parameters such as air temperature, precipitation, wind speed, geopotential height, radiation, and energy fluxes between model output and observational/reanalysis data (Zhang et al., 2021). More elaborate evaluation techniques assess the temporal evolution of global mean/sea surface/hemispheric temperature (Papalexiou et al., 2020) with respect to increasing greenhouse gas concentration or regional trends (Duan et al., 2021). Acknowledging its importance for consistent climate simulation, Simpson et al. (2020) evaluate the atmospheric circulation in terms of mean atmospheric fields, in combination with dynamical features like the jet stream, stationary waves, and blocking. 
In contrast, Kristóf et al. (2020) evaluated the positions of potential action centres of atmospheric teleconnections as a proxy for circulation. Another approach is taken by Brands (2022) and Cannon (2020), who both assess circulation biases in correspondence to the representation of circulation types. Whereas Brands (2022) uses Lamb weather types, the analysis in Cannon (2020) is based on principal component analysis (PCA)-derived modes of variability. Such modes of variability, extracted by eigentechniques from spatio-temporal gridded data, have been the objective of evaluation efforts in recent years as their spatial patterns are supposed to reflect large-scale dynamical processes in the climate system. For example, Fasullo et al. (2020) and Coburn and Pryor (2021) have assessed the representation of six oceanic and atmospheric modes in terms of spatial and spectral accuracy, including an evaluation of the interaction between modes. Still, it has been recognised that eigenmethods suffer from a number of limitations because geometric constraints such as linearity and normality, orthogonality, and simultaneity do not correspond to physical properties of the climate system (Monahan et al., 2009; Fulton and Hegerl, 2021; Hynčica and Huth, 2020; Lee et al., 2019) and hinder their interpretation. Besides, the evaluation of climate modes, such as El Niño–Southern Oscillation (ENSO) or North Atlantic Oscillation (NAO), is usually done at the component level. But it is the coupling among those components which defines the large-scale variability in climate at interannual and decadal timescales (Tsonis et al., 2008; Steinhäuser and Tsonis, 2014). Complex network methods are able to account for non-linear, time-lagged, and high-order interactions in high-dimensional data and were introduced in climate sciences by the beginning of the 21st century (for an overview see Dijkstra et al., 2019). 
Such networks investigate the interdependencies between all their constituent components, thereby unveiling dynamical features that could remain hidden to traditional analysis techniques. A rather fundamental property of climate networks is their organisation in terms of communities – clusters of strongly connected nodes forming semi-autonomous subcomponents of the climate system with non-accidental similarity to many known modes of variability (Steinhäuser et al., 2009; Tsonis et al., 2011; Tantet and Dijkstra, 2014) that interact dynamically in multiple ways. Such an emergent property has been ascribed to the mismatch between spatial and temporal scales on a sphere, which allows only a finite number of degrees of freedom (Yang et al., 2021). The comparison of such complex network-derived communities between climate simulations and observation/reanalysis data sets was used for evaluation purposes first by Steinhäuser and Tsonis (2014). They assessed the community structure in climatic fields finding rather low consistency between the model runs and the reference data set. Likewise, Fountalis et al. (2015) assessed the community structure of model simulations but complemented it with an evaluation of the interaction strength of the communities with ENSO. The idea was further developed by Fountalis et al. (2018) and Falasca et al. (2019) in their so-called δ-MAPS approach to comprise a whole network of all communities, which is evaluated with regards to the distribution and size of communities, the interaction strength, and the distribution of the links. Note that there is another line of research into the evaluation of causal networks (for instance Vázquez-Patiño et al., 2019, or Nowack et al., 2020) which is somewhat different to the approach followed here. In the present article, we explain (Sect. 3) and apply (Sect. 
4) δ-MAPS (Fountalis et al., 2018) to construct functional networks for sea surface temperature (SST) and geopotential height at 500 hPa (Z500) fields, as well as a cross-network between SST and Z500, using GCM output data from the Coupled Model Intercomparison Project Phase 6 (CMIP6). We compare the derived networks to analogous networks from reanalysis data, namely CERA-20C (Laloyaux et al., 2018) and 20CRv3 (Slivinski et al., 2019), to evaluate the capacity of the GCMs in reproducing complex non-linear processes in the atmosphere and the ocean. This assessment is all the more instructive as it is not possible to tune the teleconnections directly. In nature and in models, teleconnections emerge from the interplay of the governing equations under the condition of the boundaries. A model gets them right if and only if the model specifications are sufficiently well approximated and well balanced between model components. The objective of the present study is to compare the interaction networks derived from CMIP6 GCM output from historical simulations to reference networks derived from two century-long reanalyses in order to account for uncertainties in observations and differences in construction methods as recommended by Hynčica and Huth (2020), Lee et al. (2019), and others: (i) the Coupled Reanalysis for the 20th Century (CERA-20C) provided by the European Centre for Medium-Range Weather Forecasts (ECMWF; Laloyaux et al., 2018; 10 ensemble members and ensemble mean) and (ii) the NOAA–CIRES–DOE Twentieth Century Reanalysis version 3 (20CRv3) provided by the National Oceanic and Atmospheric Administration (NOAA)/Physics Science Laboratory (PSL) (Slivinski et al., 2019; best estimate). The presented study is intended to help the selection of physically plausible GCM runs for further dynamical downscaling in the Coordinated Downscaling Experiment–European Domain (https://www.euro-cordex.net/, last access: 3 February 2022).
Therefore, the CMIP6 model ensemble evaluated here follows the list of model runs under consideration in EURO-CORDEX for which all necessary forcing data had been provided at the time of writing, plus some extra models (Table 1; references for the individual models are given there). We consider the parameters sea surface temperature (SST) and geopotential height at 500hPa (Z500). These are relatively well-observed and smoothly varying fields suitable for the construction of networks. Steinhäuser et al. (2012) confirm good network properties for SST and Z500 with many proximity-based correlation links as well as a large number of teleconnections. In accordance, Donges et al. (2011) found the maximal link density for geopotential height at about 4 to 6 km height, and Wiedermann et al. (2017) detected the highest transitivity between SST and geopotential height. From the coupled network perspective, it would be highly desirable to include further parameters in the analysis, such as sea surface salinity or, more interestingly, variables from the stratosphere and the deep ocean. Unfortunately, the observations of such parameters have only recently become more reliable and less sparse, such that the fidelity of their reanalysis fields is impossible to assess. The SST (Z500) data were remapped to a common grid of 2.25°×2.25° (2.5°×2.5°) resolution. Regions with sea ice are excluded in SST, as are circles of 5° radius around the poles at Z500, because of the possibly biased representation of the polar vortices.
The analysis is carried out for seasonal anomalies in the overlapping time period from 1901 to 2010. The procedure used to assign an assessment score to each model run comprises a number of algorithmic stages that build on each other. As they are not yet well known in the climatological community, we present them in detail in the following subsections:

• Detrending with trend EOF (Sect. 3.1)

• Network construction with δ-MAPS (Sect. 3.2)

  □ Domain identification (Sect. 3.2.1)

  □ Network of domains (Sect. 3.2.2)

• Distance covariance and distance correlation (Sect. 3.3)

  □ Distance multivariance and distance multicorrelation (Sect. 3.3.1)

• Comparison of networks with structural similarity index and multivariate network quality score (Sect. 3.4).

3.1 Detrending with trend EOF

Prior to the construction of the δ-MAPS networks, the data have to be detrended to avoid the correlations being distorted by long-term trends. Although it is still the most widely used technique, linear detrending has been shown to be hardly appropriate for removing the effects of external forcing (anthropogenic and natural) from climatic time series (Frankignoul et al., 2017), given its non-linear structure and the dynamical response mechanisms including long-range memory. Conventional empirical orthogonal function (EOF) decomposition is not well suited for trend detection either, for a number of reasons (Hannachi, 2007), which often cause the spreading of long-term trends between several modes of internal variability. Instead, we apply a non-parametric technique, so-called trend EOF (Hannachi, 2007), which identifies spatial patterns of trends defined as a common non-linear, but monotone increase. The method is based on the singular value decomposition (SVD) of the matrix of inverse ranks, instead of the direct observations as in conventional EOF analysis.
Since sequences of inverse ranks provide a robust measure of monotonicity, trend EOFs are able to separate patterns associated with monotone (non-linear) trends, albeit small, from patterns not associated with trends. Trend EOFs have since been applied in a number of studies (e.g. Barbosa and Andersen, 2009; Li et al., 2011; Meegan Kumar et al., 2021; among others). Fisher (2015) compared trend EOFs, along with conventional EOFs, to a selection of other PCA-based techniques, which are designed to extract space–time patterns maximising criteria like persistence, predictability, or autocorrelation. In contrast to conventional EOFs, all the tested methods very robustly detect a leading EOF pattern with a respective principal component (PC) that presents a distinct non-linearly increasing trend. We therefore consider trend EOFs to be an appropriate technique for identifying anthropogenic greenhouse gas (GHG)-forced trends. Let X=((x[it])) be the matrix of anomaly data at grid cells i=1…n (numbered consecutively) and times t=1…T. The time series x[i] at grid cell i is transformed to the vector of inverse ranks q[i] by setting q[it] equal to the time position of the t-th-largest value in x[i]. The sequence q[i] indeed reflects the total monotonicity of x[i]: in monotone series the inverse ranks are ordered according to the trend. The stronger the trend in x[i], the stronger the pattern in q[i]. By maximising the correlation in Q=((q[it])), we find a common trend that is shared (to some extent) by all grid cells, which makes sense in light of GHG-forced warming. After centring and cosine weighting of Q with respect to the corresponding latitude, the principal components and the loading patterns are obtained by SVD: Q=UΣV^T. The trend is now concentrated in the first (few) principal component(s), strongly distinguished by high eigenvalue(s) standing out over the remaining low and slowly descending spectrum.
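As a concrete illustration, the inverse-rank construction and the subsequent SVD can be sketched in a few lines of NumPy. This is a minimal sketch of the procedure described above, not the authors' implementation; the function name `trend_eof`, the array layout (rows: times, columns: grid cells), and the toy data are our own assumptions.

```python
import numpy as np

def trend_eof(X, lats):
    """Minimal sketch of trend EOF (Hannachi, 2007) as described above.

    X    : (T, n) anomaly matrix (rows: times, columns: grid cells)
    lats : (n,) latitudes in degrees for cosine weighting
    """
    # Inverse ranks: Q[t, i] = time position of the t-th largest value of x_i
    Q = np.argsort(-X, axis=0).astype(float) + 1.0
    Q -= Q.mean(axis=0, keepdims=True)               # centre in time
    w = np.sqrt(np.cos(np.deg2rad(lats)))[None, :]   # cosine-latitude weights
    U, S, Vt = np.linalg.svd(Q * w, full_matrices=False)
    u1 = Vt[0]        # leading singular vector in rank space (spatial loading)
    w1 = X @ u1       # projection back to physical space: trend time series
    return w1, S

# A common monotone trend concentrates in one outstanding singular value.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 120)
X = 2.0 * t[:, None] + 0.3 * rng.standard_normal((120, 50))
w1, S = trend_eof(X, lats=np.linspace(-60.0, 60.0, 50))
print(S[0] / S[1])    # leading eigenvalue stands out over the rest of the spectrum
```

With a shared monotone trend, the rank matrix is close to rank one, so the first singular value dominates and the back-projected series `w1` recovers the common trend.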
If second- or third-order outstanding eigenvalues are detected, they indicate additional, regionally confined independent trends, which are generated by internal dynamical feedback processes. For our purpose of identifying regions with coherent time evolution, we would therefore want to retain such regional trends and eliminate only the trend associated with the first trend PC. Likewise, regional trends caused by volcanic eruptions are most probably not filtered out by the first trend EOF either. However, the impacts of 20th-century eruptions lasted only for short time periods, and on the other hand they are not well represented in surface-input reanalyses like CERA-20C and 20CRv3 (Fujiwara et al., 2015). We therefore assume that our evaluations remain valid. The first trend PC u[1] is now transformed back to physical space by projection w[1]=Xu[1], and the corresponding spatial pattern is composed of the regression coefficients between the trend PC w[1] and the anomaly time series of the original field x[i], i=1…n. To allow for an annual cycle in the trend patterns, we extend the trend EOFs in analogy to season-reliant EOFs (Wang and An, 2005; see also cyclo-stationary EOFs in Yeo et al., 2017): $\mathbf{Q}=\left(\mathbf{Q}_{\text{MAM}}\,|\,\mathbf{Q}_{\text{JJA}}\,|\,\mathbf{Q}_{\text{SON}}\,|\,\mathbf{Q}_{\text{DJF}}\right)$ (seasonally centred, inverse ranks calculated for each season individually). This extracts a recurrent sequence of seasonal trend patterns with one associated trend PC for the magnitude of the whole cycle, as opposed to one common pattern for all seasons as in non-seasonal EOF analysis or four individual patterns with their associated individual PCs as in seasonal EOFs, respectively. At this stage it would be possible to apply a secondary SVD to the seasonal warming patterns to obtain a smoother annual cycle.
While such a procedure seems undue for seasonal data, it would be a reasonable approach in the case of monthly data. Instead of applying two sequential EOFs to Q, a tensor decomposition like higher-order singular value decomposition (HOSVD; De Lathauwer et al., 2000) would serve this purpose more elegantly. After having detrended the time series, we are able to standardise the seasonal variances without the interference of the seasonal trends, which would otherwise bias our estimates. On their part, seasonally varying variances could degrade the estimated correlations between grid cells in the first stage of the δ-MAPS algorithm, giving increased weight to seasons with higher variance. In turn, the spatial component of the variance will be important in the second stage of δ-MAPS; therefore we augment the deseasonalised time series again with their overall (non-seasonal) variance.

3.2 Network construction with δ-MAPS

3.2.1 Domain identification

To infer the functional interactions within and between spatio-temporal gridded data sets of climatological parameters, we adopt the δ-MAPS algorithm proposed by Fountalis et al. (2018). This algorithm is rooted in network science/graphical modelling, in which graphs are used to express the dependence structure between random variables. A graph or network consists of a set of nodes connected by a set of edges, which describe the interactions between the nodes. Networks can be classified depending on their topology: simple networks like lattices and fully connected networks or complex networks like scale-free and small-world networks. Small-world networks are often observed in climate and other earth sciences, in the human brain, and in social networks. Their nodes are strongly clustered into semi-autonomous components, and the average shortest path length between any two nodes is small.
In contrast to structural networks or flow networks, where the edges are physically observable (like wired connections or trajectories of particles, respectively), functional networks are inferred from the behaviour of the nodes. We consider the grid cells of a selected climatological field as the nodes of the graph. The spatial embedding is naturally given by the locations of the grid cells. In Fountalis et al. (2018) the edges of a fully connected grid-cell-level network are defined using the unpruned Pearson correlation ϱ of the time series as an association measure between any pair of nodes. Based on this weighted network, the δ-MAPS algorithm identifies semi-autonomous components D[1]…D[K], called domains. A domain is a spatially contiguous set of grid cells with highly correlated temporal activity. Fountalis et al. (2018) propose an iterative algorithm that alternately expands and merges a preliminary set of domain seeds S (neighbourhoods with locally maximal correlation, 3×3 grid cells in our case) so as to find the maximum possible sets of grid cells that satisfy the homogeneity constraint δ: let D be a spatially contiguous set of grid cells with cardinality |D|; then

$$\delta \le \varrho_D := \frac{1}{|D|\,(|D|-1)} \sum_{i \ne j \in D} \varrho_{ij} \,, \tag{1}$$

where ϱ[ij] is the correlation between the time series at grid cells i and j, and δ is a chosen parameter to regulate the number and size of the domains. The domains are expanded to neighbouring grid cells (one at a time) as long as ϱ[D]≥δ. Two domains D[i] and D[j] are merged if they contain at least one pair of adjacent grid cells and their union still satisfies the threshold δ. The algorithm stops when no more domains can be merged or expanded. The number of domains K generated by this algorithm is not predefined.
Overlapping domains are allowed in δ-MAPS because grid cells might be influenced by more than one physical process. If a grid cell does not satisfy the homogeneity constraint with any of its neighbours, it remains unassigned. Deviating from Fountalis et al. (2018), we use Spearman's rank correlation to determine the similarity between grid cells to allow for monotone, yet non-linear association. Furthermore, we set the threshold δ for the minimal average correlation within a domain equal to a selected high quantile of all pairwise correlations (our δ is not based on a significance test; therefore there is no need to correct for autocorrelation). Lower thresholds allow the domains to expand and merge further, resulting in a smaller number of spatially larger domains, i.e. lower parcellation, and vice versa. In Sect. 4, we choose δ so as to produce “intuitive” domains evocative of known teleconnection patterns. In Falasca et al. (2020), the identification of domains was further refined: grid cells are assigned to a common domain if their time-varying complexity (quantified by recurrence entropy) evolves coherently. Coherent evolution of complexity reflects coherent dynamical evolution and is thus an even stronger indicator of semi-autonomous component organisation than correlation between the original climatological time series. But for complexity time series to be constructed, the proposed recurrence measure has to be evaluated on moving time windows (100-year windows over 6000 years of monthly values in Falasca et al., 2020). Unfortunately, our time series are not long enough to detect complexity changes by means of recurrence entropy (nor are such changes likely to actually occur in the real climatological fields over this period), so we have to stick to the original definition of δ-MAPS in Fountalis et al. (2018).
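To make the expansion step concrete, the following sketch grows domains greedily from given seeds on a toy one-dimensional grid; merging and overlapping domains are omitted for brevity. All names (`grow_domains`, `rho_D`) and the toy correlation matrix are illustrative assumptions, not the authors' code.

```python
import numpy as np

def rho_D(C, cells):
    """Average pairwise correlation of a cell set (the quantity bounded by delta)."""
    cells = list(cells)
    m = len(cells)
    if m < 2:
        return 1.0
    sub = C[np.ix_(cells, cells)]
    return (sub.sum() - m) / (m * (m - 1))   # drop the unit diagonal

def grow_domains(C, seeds, neighbours, delta):
    """Greedy domain expansion in the spirit of delta-MAPS (simplified sketch).

    C          : (n, n) pairwise correlation matrix with unit diagonal
    seeds      : seed cell indices (locally maximal correlation)
    neighbours : dict cell -> adjacent cells (spatial contiguity)
    delta      : homogeneity threshold
    """
    domains = []
    for s in seeds:
        D = {s}
        while True:
            frontier = {v for c in D for v in neighbours[c]} - D
            # try the best neighbouring cell; stop when none keeps rho_D >= delta
            best = max(frontier, key=lambda v: rho_D(C, D | {v}), default=None)
            if best is None or rho_D(C, D | {best}) < delta:
                break
            D.add(best)
        domains.append(D)
    return domains

# Toy example: a chain of 6 cells forming two internally coherent blocks.
C = np.full((6, 6), 0.1)
C[:3, :3] = 0.9
C[3:, 3:] = 0.9
np.fill_diagonal(C, 1.0)
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
doms = grow_domains(C, seeds=[1, 4], neighbours=nbrs, delta=0.6)
print(doms)
```

With δ=0.6 the two blocks are recovered as separate domains, since adding a cell from the other block would pull the average pairwise correlation below the threshold.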
The first stage of δ-MAPS is a local community detection algorithm, where the criterion to maximise is the number of grid cells assigned to a minimum number of communities under the conditions (i) ϱ[D]≥δ, (ii) D being spatially contiguous, and (iii) D containing a seed s∈S (Fortunato and Hric, 2016). As this problem is NP-hard (not solvable in polynomial time; Fountalis et al., 2018), the greedy algorithm of Fountalis et al. (2018) only approximates one possible solution. Despite this, it is able to detect meaningful communities of any size (no preferred scale) and independently from the network structure in other spatial regions.

3.2.2 Network of domains

Subsequently, the domains identified above serve as super-nodes in the second stage of δ-MAPS. A functional weighted network is inferred between the domains on the basis of a dependence measure (in Fountalis et al., 2018, the lagged Pearson correlation is used; we use distance correlation; see Sect. 3.3). The time series of a domain is defined as

$$x_D = \left(x_{D1} \dots x_{DT}\right), \qquad x_{Dt} = \frac{1}{\sum_{i \in D} \cos\varphi_i} \sum_{i \in D} x_{it} \cos\varphi_i \,, \tag{2}$$

where φ[i] is the latitude of grid cell i. In contrast to Falasca et al. (2019), we use the means instead of the sums of the grid cells for the domain time series. We do so because otherwise the variances of the domains would grow with their size, which would hinder interpretation. On the other hand, the spatial correlation within the domains, the precondition for grid cells to form a domain, impedes the decrease in the variance of the domain mean that the central limit theorem would imply at the rate of $\sqrt{|D|}$. Instead, the variances of the domain means are of comparable magnitude regardless of the domain size.
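The latitude-weighted domain mean of Eq. (2) translates directly into code. The array layout and the function name `domain_series` are assumptions of this sketch.

```python
import numpy as np

def domain_series(X, cells, lats):
    """Cosine-latitude-weighted mean time series of a domain (Eq. 2).

    X     : (n, T) anomaly array (rows: grid cells, columns: times)
    cells : indices of the grid cells belonging to the domain
    lats  : (n,) latitudes of the grid cells in degrees
    """
    w = np.cos(np.deg2rad(lats[cells]))
    return (w[:, None] * X[cells]).sum(axis=0) / w.sum()

# Two cells at 0 and 60 degrees latitude receive weights 1 and 0.5.
X = np.array([[1.0, 2.0], [3.0, 4.0]])
xD = domain_series(X, cells=np.array([0, 1]), lats=np.array([0.0, 60.0]))
print(xD)   # (1*[1,2] + 0.5*[3,4]) / 1.5
```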
Every possible link at every possible lag $-L \le l \le L$ is tested for significance, which constitutes a multiple-testing problem such that the cumulative probability of type I errors increases. One way to control the false discovery rate (FDR) at a predefined level α was proposed by Benjamini (2010): the p values of the individual tests are sorted in ascending order, $p_{(1)} \le \dots \le p_{\left(\frac{1}{2}K(K-1)(2L+1)\right)}$, and the hypothesis (H[0]: link is insignificant) is rejected only for those tests where

$$p_{(k)} < \frac{2k\alpha}{K(K-1)(2L+1)} \,.$$

The network consists of two maps, D and W. The map D assigns to every grid cell one, several, or no domains (D: set of nodes (grid cells) ⟶ power set of domains 𝒫(D[1]…D[K])). The map W assigns to every pair of domains a link equal to the maximal (lagged) dependence between them (W: {D[1]…D[K]}×{D[1]…D[K]} ⟶ ℝ); we allow lags of up to 10 seasons. The distinction between grid cells that are dependent within the same domain and grid cells that are dependent across two different domains allows δ-MAPS to differentiate between local diffusion phenomena and remote interactions, for instance an atmospheric bridge or an oceanic tunnel (Liu and Alexander, 2007). Since the techniques to construct the δ-MAPS network are statistical, long time series are advantageous in order to obtain robust estimates of the dependence measures. In the case of non-stationarity, such estimates would be biased and reflect only a temporal average connectivity between the components of the network.
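The step-up procedure can be sketched as follows. The function name `bh_reject` and the p values are invented for the demonstration; with m tests in total, the text's threshold 2kα/(K(K−1)(2L+1)) is exactly kα/m.

```python
import numpy as np

def bh_reject(pvals, alpha):
    """Benjamini-Hochberg step-up procedure controlling the FDR at level alpha.

    Returns a boolean mask in the original order: True where the null
    hypothesis (link is insignificant) is rejected. With m tested links,
    p_(k) is compared against k*alpha/m (the text uses a strict inequality;
    the usual formulation uses <=, which we adopt here).
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        kmax = np.max(np.nonzero(below)[0])   # largest k with p_(k) <= k*alpha/m
        reject[order[: kmax + 1]] = True
    return reject

p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(bh_reject(p, alpha=0.05))
```

For these eight p values at α=0.05, only the two smallest survive: p_(2)=0.008 ≤ 2·0.05/8, but p_(3)=0.039 > 3·0.05/8, and no larger k recovers.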
The time dependence can be addressed using evolving networks, which are constructed over sliding time windows (see for instance Kittel et al., 2021, and Novi et al., 2021). The present study considers a time-constant network for the period 1901–2010 and a shorter-period network for 1951–2010, when more observations are available for assimilation into the reanalyses. To investigate the temporal evolution, a third network is constructed for 1901–1955. The complex network framework offers many more approaches to exploit the richness of the data, for instance multi-scale, causal, and multi-layer networks. Wavelet multi-scale networks were proposed for investigating interactions in the climate system simultaneously at different temporal scales, revealing features which usually remain hidden when looking at one particular timescale only (Agarwal et al., 2018, 2019). Interactions between processes evolving on different timescales are investigated by Jajcay et al. (2018). Moreover, as the number of identified domains within a climatological field is drastically smaller than the number of original grid cells, this also opens up the possibility of investigating the causal relationships between them (Nowack et al., 2020), although the basic assumption of causal network inference that the dependence structure can be represented by a directed acyclic graph is questionable in the climate context. The construction of both dependence-based and causal networks can naturally be extended to cross-networks, which include multiple fields (Feng et al., 2012; Ekhtiari et al., 2021).

3.3 Distance covariance and distance correlation

As physical processes in climate are highly dynamical and mostly non-linear (Donges et al., 2009), we decided to substitute the Pearson correlation in the second step of the network inference with a non-linear dependence measure: distance correlation, proposed by Székely et al. (2007).
To begin with, distance covariance, calculated from the pairwise Euclidean distances within each sample, is an analogue to the product-moment covariance, but it is zero if and only if the random vectors are independent. The intuition of distance covariance is that if there exists a dependence between the random variables X and Y, then for two similar realisations of X, say x[s] and x[t], the two corresponding realisations of Y, y[s] and y[t], should be similar as well. Note that the opposite (x[s], x[t] dissimilar ⟹ y[s], y[t] dissimilar) is true for linear dependence, but not true in general. Unlike the widely used information measures, distance covariance has a compact representation, is computationally fast, and is reliable in a statistical sense for sample sizes common in climatology because it is not necessary to estimate the density of the samples. We use the unbiased version of distance covariance given in Székely and Rizzo (2014). Let (x[t]), (y[t]), t=1…T, be a statistical sample from a pair of real or vector-valued random variables X and Y.
First, compute all pairwise Euclidean distances, $a_{st} = \lVert x_s - x_t \rVert$ and $b_{st} = \lVert y_s - y_t \rVert$. Then perform a double centring for all s≠t:

$$A_{st} = a_{st} - \frac{1}{T-1}\sum_u a_{su} - \frac{1}{T-1}\sum_v a_{vt} + \frac{1}{(T-1)(T-2)}\sum_{uv} a_{uv}$$

$$B_{st} = b_{st} - \frac{1}{T-1}\sum_u b_{su} - \frac{1}{T-1}\sum_v b_{vt} + \frac{1}{(T-1)(T-2)}\sum_{uv} b_{uv} \,.$$

Then distance covariance dCov is defined as

$$\text{dCov}(X,Y) = \frac{1}{T(T-3)} \sum_{st} A_{st} B_{st} \,. \tag{3}$$

Distance variance dVar and distance correlation dCor are defined analogously to moment variance and moment correlation, respectively:

$$\text{dVar}(X) = \text{dCov}(X,X) \quad\text{and}\quad \text{dCor}(X,Y) = \frac{\text{dCov}(X,Y)}{\sqrt{\text{dVar}(X)\,\text{dVar}(Y)}} \,. \tag{4}$$

Distance correlation has a number of desirable properties:

1. 0 ≤ dCor(X,Y) ≤ 1;

2. dCor(X,Y) = 0 ⟺ X, Y independent;

3. dCor(X,Y) = 1 ⟺ Y is a linear transformation of X.

Distance correlation is furthermore robust against auto-dependence (Fokianos and Pitsillou, 2018), which eliminates the need to correct for autocorrelation, as was done in Fountalis et al. (2018). The correction for autocorrelation involves the estimation of a rather large number of autocorrelation coefficients, which adds to the statistical uncertainty; its expendability is therefore statistically advantageous. An efficient test of distance correlation based on the χ² distribution was proposed by Shen et al.
(2022), which is universally consistent and valid for α≤0.05. Distance correlation is defined between vectors of arbitrary dimension. One way to take advantage of this property in the construction of networks would be to assign the measurements of more than one climatological variable to every node, e.g. sea surface temperature and salinity or 500hPa geopotential height and temperature. We apply distance correlation in the network inference between the domains, but not in the construction of the domains. The reason is that in domain construction we are looking for similar temporal behaviour between grid cells. We choose Spearman's rank correlation because it accounts for non-linear, yet monotone association. In contrast, in network inference we are expressly interested in non-linear dependence including non-monotonicity.

3.3.1 Distance multivariance and distance multicorrelation

Distance correlation has been generalised to distance multivariance/multicorrelation by Böttcher et al. (2019) to measure the dependence between an arbitrary number n of random variables in the sense of Lancaster interaction (Lancaster, 1969; Streitberg, 1990). The Lancaster interaction ΔF quantifies the fraction of dependence between them that is not explained by factorisation, i.e. their synergy. For n=3, let F[123] be the three-dimensional joint distribution function of X[1], X[2], and X[3]; F[12], F[13], and F[23] the pairwise joint distributions; and F[1], F[2], and F[3] the marginal distribution functions. Then the Lancaster interaction is defined as

$$\Delta F = F_{123} - F_1 F_{23} - F_2 F_{13} - F_3 F_{12} + 2\,F_1 F_2 F_3 \,,$$

the fraction of F[123] that is not explained by pairwise dependence. Lancaster interaction excludes, in particular, linear dependence, as this is indeed explained by pairwise dependence.
The concept of higher-order dependence is related to joint cumulants and higher-order moments in that

$$\kappa_n\left(X_1 \dots X_n\right) = \int x_1 \dots x_n \,\mathrm{d}\Delta F$$

(Streitberg, 1990). Joint cumulants are traditionally applied in multiple-point statistics and hyper-spectral analysis to describe non-linear interaction and non-Gaussian multidimensional distributions. Climate science has seen only a small number of implementations, including the contributions of Carlos A. L. Pires related to teleconnections (e.g. Pires and Hannachi, 2017, 2021). As a feature of complex systems, higher-order interactions have already been recognised as critical for the emergence of complex behaviour such as synchronisation and bifurcation in scientific fields as diverse as social network science, ecology, molecular biology, quantum physics, neurosciences, epidemics, geodesy, image processing, and genetics (Battiston et al., 2020), and tools for the construction of hypergraphs (graphs with links that comprise more than two nodes) are increasingly available. To our knowledge, hypergraphs have not yet been introduced in climatology. Distance multivariance is defined analogously to distance covariance (Eq. 3) and is a strongly consistent estimator of Lancaster interaction (Böttcher et al., 2019).
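Both the pairwise estimator (Eqs. 3–4) and its multivariate extension are built from the same centred distance matrices. The following is a minimal NumPy sketch of the pairwise case for univariate samples, using the centring constants exactly as written above; the function names are ours, and production implementations exist (e.g. the Python `dcor` package or the R `energy` package).

```python
import numpy as np

def u_centre(a):
    """Double centring of a pairwise distance matrix, as given in the text."""
    T = a.shape[0]
    row = a.sum(axis=1, keepdims=True) / (T - 1)
    col = a.sum(axis=0, keepdims=True) / (T - 1)
    tot = a.sum() / ((T - 1) * (T - 2))
    A = a - row - col + tot
    np.fill_diagonal(A, 0.0)     # the definition runs over s != t only
    return A

def dcov(x, y):
    """Distance covariance (Eq. 3) for univariate samples."""
    T = len(x)
    a = np.abs(x[:, None] - x[None, :])   # pairwise Euclidean distances
    b = np.abs(y[:, None] - y[None, :])
    return (u_centre(a) * u_centre(b)).sum() / (T * (T - 3))

def dcor(x, y):
    """Distance correlation (Eq. 4)."""
    return dcov(x, y) / np.sqrt(dcov(x, x) * dcov(y, y))

rng = np.random.default_rng(1)
x = rng.standard_normal(200)
y = x ** 2
print(round(dcor(x, y), 2))   # dCor detects the non-monotone quadratic dependence
```

A linear transformation of x gives dCor = 1 exactly, while the quadratic example above yields a clearly non-zero dCor even though the Pearson correlation of a symmetric sample with its square is close to zero.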
For n=3, with C[st] the analogue to A[st] and B[st] for a third random variable Z,

$$\text{dMvar}(X,Y,Z) = \frac{1}{T(T-3)} \sum_{st} A_{st} B_{st} C_{st} \,. \tag{6}$$

Distance multicorrelation is defined likewise, with a slightly different normalisation:

$$\text{dVar}_3(X) = \text{dMvar}(X,X,X) \quad\text{and}\quad \text{dMcor}(X,Y,Z) = \frac{\text{dMvar}(X,Y,Z)}{\left(\text{dVar}_3(X)\cdot\text{dVar}_3(Y)\cdot\text{dVar}_3(Z)\right)^{1/3}} \,. \tag{7}$$

Obviously, distance covariance between two random variables is covered by distance multivariance for n=2. Significance tests for distance multivariance are also given in Böttcher et al. (2019). As the asymptotic test is conservative and, furthermore, in the case of non-zero pairwise dependence the test statistic is not guaranteed to diverge, it is convenient to choose a larger FDR level than the usually employed significance levels between 0.1 and 0.01.

3.4 Comparison of networks with structural similarity index and multivariate network quality score

This study aims at comparing the interaction networks derived from CMIP6 model output to the selected reference networks. Our metric of comparison, netSSIM, is a modification of the netCorr criterion for functional networks developed by Falasca et al. (2019). The netCorr is a sophisticated metric which simultaneously evaluates the differences in topology and connectivity, combined in the adjacency matrix M of each network. Let $\mathbf{M}=\left(\left(M_{ij}\right)\right)_{i,j=1}^{n}$ be a square matrix of dimension n (number of grid cells) with

$$M_{ij} = \begin{cases} 1 & \text{if } x_i \text{ and } x_j \text{ belong to the same (and no other) domain} \\ \text{mean}\left\{ W(D_k, D_l) : x_i \in D_k,\; x_j \in D_l \right\} & \text{if both are assigned to domains} \\ 0 & \text{if } x_i \text{ or } x_j \text{ is unassigned,} \end{cases} \tag{8}$$

where we set $W(D_k, D_l) = \text{dCor}\left(x_{D_k}, x_{D_l}\right)$.
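The construction of the adjacency matrix M can be sketched as follows. The function name `adjacency`, the toy link matrix, and the domain assignments are illustrative; the averaging convention for overlapping domains follows the description in the text.

```python
import numpy as np
from itertools import product

def adjacency(n, cell_domains, W):
    """Grid-cell adjacency matrix M assembled from domains and links.

    n            : number of grid cells
    cell_domains : list of sets; cell_domains[i] = domains containing cell i
    W            : (K, K) link matrix of maximal lagged dCor, with W[k, k] = 1
    """
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            Di, Dj = cell_domains[i], cell_domains[j]
            if not Di or not Dj:
                continue   # unassigned grid cells keep M_ij = 0
            # Unweighted average over all domain pairs handles overlapping
            # domains; cells sharing a single domain get W[k, k] = 1.
            M[i, j] = np.mean([W[k, l] for k, l in product(Di, Dj)])
    return M

# Two domains linked with dCor = 0.4; cells 0 and 1 share domain 0,
# cell 2 sits in domain 1, and cell 3 is unassigned.
W = np.array([[1.0, 0.4], [0.4, 1.0]])
cells = [{0}, {0}, {1}, set()]
M = adjacency(4, cells, W)
print(M)
```

Cells within one domain receive the maximal entry 1, cross-domain pairs receive the link strength, and unassigned cells contribute zero rows and columns.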
Alternatively, M could be rearranged into a four-modal hypermatrix or tensor, made of the Kronecker product of the lat–long field with itself, containing the dependencies between the grid cells. Apart from replacing the Pearson with the distance correlation, our definition of M differs from the one in Falasca et al. (2019) in three aspects. Firstly, our links are undirected because distance correlation is much less sensitive to temporal lag than Pearson correlation. The distance correlation coefficients for lags −10≤L≤10 differ only marginally from the value for L=0. So although we do construct M using the maximum lagged distance correlation, we do not venture to infer the direction of the interaction from it. Secondly, we have defined W(D[k],D[k])=1, causing M[ij]=1 if x[i] and x[j] pertain to the same domain (and no other), to emphasise that grid cells within one domain are more strongly linked to each other than to the grid cells of other domains. Thirdly, we set M[ij] to the average of the links between the domains that x[i] and x[j] belong to, instead of the maximum, as a means to account for overlapping domains. We do not apply any weighting to this average because the mean internal rank correlation within each domain, i.e. the bond of a grid cell to its domains, is equally ≈δ by construction. The netCorr between two networks measures the spatial correlation between the respective adjacency matrices, not considering the overall level and variability within the networks. We propose to augment netCorr to netSSIM. SSIM is the structural similarity index, a measure very popular in image processing, which combines terms for brightness (mean), contrast (variance), and structure (pattern correlation) of images (Wang et al., 2004). It was introduced to the hydrological/meteorological community by Mo et al. (2014).
Let X and Y be two gridded fields:

$$\text{SSIM}(X,Y) = \frac{2\mu_X\mu_Y + c_1}{\mu_X^2 + \mu_Y^2 + c_1} \cdot \frac{2\sigma_X\sigma_Y + c_2}{\sigma_X^2 + \sigma_Y^2 + c_2} \cdot \frac{\sigma_{XY} + c_3}{\sigma_X\sigma_Y + c_3} \,, \tag{9}$$

where μ[X] and μ[Y] are the means, σ²[X] and σ²[Y] the variances, and σ[XY] the Pearson covariance between X and Y; small constants c[1]=c[2]=c[3] (we choose 0.00001) ensure regularity. The SSIM ranges from −1 to 1; it equals 1 only in the case of identity and −1 for an anti-analogue (equal mean and variance, but correlation = −1). SSIM = 0 means no similarity. Note that the SSIM is not invariant under translation and rotation, which corresponds to our requirements because we want the teleconnections to sit in the right place. SSIM is not a distance metric, but a distance metric can be constructed from it (Brunet et al., 2011). Falasca et al. (2019) recommend the use of their netCorr criterion always in combination with a criterion comparing the strength of the interaction, which they define as the sum of the links of a particular domain in terms of covariance. We argue that the strength is a criterion that intermingles the distribution of interactions between the domains with the variances of the domains, which, in turn, are determined by the size of the domains and the variances of the included nodes. We therefore prefer to evaluate the interactions on their own using the netSSIM.
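Equation (9) reads directly as code. The helper below treats the two fields as flat arrays and uses the same small regularisation constant for all three terms; the function name and the synthetic data are ours.

```python
import numpy as np

def ssim(X, Y, c=1e-5):
    """Structural similarity index of two fields (Eq. 9), with c = c1 = c2 = c3."""
    mx, my = X.mean(), Y.mean()
    vx, vy = X.var(), Y.var()
    sx, sy = np.sqrt(vx), np.sqrt(vy)
    cov = ((X - mx) * (Y - my)).mean()
    return ((2 * mx * my + c) / (mx ** 2 + my ** 2 + c)
            * (2 * sx * sy + c) / (vx + vy + c)
            * (cov + c) / (sx * sy + c))

rng = np.random.default_rng(2)
X = rng.standard_normal(400)
print(ssim(X, X))                    # identity: luminance, contrast, structure all equal 1
print(ssim(X, 2 * X.mean() - X))     # anti-analogue: same mean and variance, correlation -1
```

The identity case returns 1, and reflecting the field about its mean leaves the first two terms at 1 while driving the structure term to approximately −1.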
The evaluation of the variances (or standard deviations) of model output data is a task that is already routinely performed in conventional evaluation set-ups. We apply the (latitude-weighted) SSIM to two adjacency matrices M (Eq. 8) constructed from the significant distance correlations in two reference and/or model networks. In this way, we calculate netSSIM indices for the unipartite networks for SST and Z500 and for the cross-networks between the SST and Z500 domains. Alternatively, we could calculate the SSIM between adjacency matrices in a pointwise manner, comparing the slices of the four-modal hypermatrices that correspond to the links of one individual grid cell to all others and then taking the weighted mean of all pointwise SSIMs. Finally, we define a network quality score (NQS) by applying an exponential transform to the netSSIMs, which projects them to the interval (0,1] (recall that the netSSIM lives on [−1,1]). The same transform was used in Sanderson et al. (2015) and Brunner et al. (2020) to construct quality scores from error measures, which are later fed into a model selection:

$$\text{NQS} := \exp\left\{-\left(1-\text{netSSIM}\right)^{2}\right\} \,. \tag{10}$$

In order to combine the three NQSs with respect to SST, Z500, and SST–Z500, we take the geometric mean (equal to the exponential of the negative arithmetic mean of the squared differences (1−netSSIM)²).
This shall be the multivariate network quality score MNQS:

$$\text{MNQS}:=\left(\text{NQS}_{\text{SST}}\cdot\text{NQS}_{\text{Z500}}\cdot\text{NQS}_{\text{SST-Z500}}\right)^{\frac{1}{3}}=\exp\left\{-\frac{1}{3}\left\Vert\mathbf{1}-\text{netSSIM}\right\Vert^{2}\right\}\,.\qquad\text{(11)}$$

The MNQS corresponds to the exponential transform of the squared Euclidean distance between the three-dimensional netSSIM vector and the ideal netSSIM value $(1,1,1)$, which would be attained by a network identical to the reference, normalised with the distance between $(1,1,1)$ and $(0,0,0)$, $(0,0,0)$ being the value which indicates no similarity. Any other vector norm could be utilised for the construction of the MNQS, for instance an $L_p$-norm with p≠2 or some weighting of the directions. The netSSIMs for additional parameters can be incorporated into the MNQS in a straightforward way. Finally, the considered models can be ranked with respect to these scores. The netSSIM is also useful when exploring the differences between networks in more detail. As mentioned above, the slices of M with respect to a single grid cell or domain can be compared one by one. It is further possible to calculate the netSSIM for all pairwise links in a certain region, excluding the rest of the globe, or for all links from one region to another. This way, differences across models or time periods can be tracked down directly to their origin. We demonstrate the functioning of every sub-procedure considering the CERA-20C ensemble mean over the whole period 1901–2010 as an example. All procedures are furthermore applied to the periods 1901–1955 and 1951–2010.
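The equality of the two forms in Eq. (11) — geometric mean of the three NQSs versus the closed exponential expression — can be checked numerically; the sketch below (function name and example values ours) verifies it for an arbitrary netSSIM triple.

```python
import numpy as np

def mnqs(netssim_sst, netssim_z500, netssim_cross):
    """Multivariate network quality score (Eq. 11)."""
    s = np.array([netssim_sst, netssim_z500, netssim_cross], dtype=float)
    geometric = np.prod(np.exp(-(1 - s) ** 2)) ** (1 / 3)   # geometric mean of the NQSs
    exponential = np.exp(-np.sum((1 - s) ** 2) / 3)         # closed form of Eq. (11)
    assert np.isclose(geometric, exponential)               # the two forms agree
    return exponential

print(mnqs(0.95, 0.96, 0.95))   # high similarity in all three networks -> about 0.998
```

The geometric mean penalises a single poor component more strongly than an arithmetic mean would, which matches the intent of weighting all three subsystems equally.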
Individual runs of CERA-20C as well as 20CRv3 and CMIP6 model realisations are discussed where they are of special interest.

4.1 Detrending with trend EOFs

Trend EOFs (Hannachi, 2007), as introduced in Sect. 3.1, produce time series of common change (in SST and Z500) generated from the trend PCs in the inverse-rank space and the respective trend-loading patterns (four seasonal trend-loading patterns per trend PC in the case of season-reliant/cyclo-stationary trend EOFs), indicating regions of stronger/weaker change. As expected, the increase in SST is concentrated in the first trend PC (the leading eigenvalues are 30 to 50 times higher than the trailing ones), with the other trend PCs showing no secular trend. Figure 1a depicts the global mean sea surface temperature anomaly (GMSSTa) (with respect to the base period 1961–1990) in the CERA-20C ensemble mean, the forced temperature increase estimated by the first trend EOF, and the detrended anomalies. For comparison, we show the same plot for linearly detrended SSTs in Fig. S1 in the Supplement. The grid-cell-wise detrended anomalies are deseasonalised with regard to variance. The GMSSTa derived from trend EOFs in all runs of CERA-20C (not shown) as well as in the ensemble mean show a very similar evolution among each other and to Zhu et al. (2018), with the breakpoints in temperature increase postulated therein at 1942, 1975, and 2004 clearly discernible. Likewise, the physical-space loading patterns of the ensemble mean (Fig. 1b) and all runs of CERA-20C are very similar to each other and resemble the leading modes extracted using slow feature analysis and dynamical mode decomposition in Fulton and Hegerl (2021), identified as warming trends. Analogous plots for geopotential height anomalies at 500 hPa for the CERA-20C ensemble mean can be found in Fig. 1c and d. Unfortunately, we were not able to find any comparable study in the literature where Z500 was analysed for trends over the 20th century. Gillett et al.
(2013), Knutson and Ploshay (2021), Garreaud et al. (2021), and Raible et al. (2005) considered sea level pressure (SLP) trends over different time periods and regions. Although not fully comparable, there is a certain similarity. The projected trends as well as the loading patterns in the 20CRv3 best estimate are somewhat different for the period 1901–2010 (Fig. S2) but agree much better for 1951–2010 (not shown). This might well be related to low observational coverage during the first half of the century; we thus take this disagreement as a signal for caution. When subjected to the same procedure, the CMIP6 model output SST and Z500 anomalies produce trend EOFs and loading patterns roughly similar to CERA-20C and 20CRv3 (not shown). Differences are more or less obvious, though, such that an evaluation of the GMSSTa time series in the spirit of Papalexiou et al. (2020) would be an obvious choice but is out of the scope of this paper.

4.2 δ-MAPS for CERA-20C on 1901–2010

4.2.1 Domain identification

Our algorithm, presented in Sect. 3.2.1, combines grid cells with highly rank-correlated time evolution into domains. Domains have to be contiguous but may overlap; grid cells may remain unassigned. The average mutual rank correlation within a domain has to be higher than a selected threshold δ; we examined the quantiles $q_{0.9}\left(\varrho_{ij}\,|\,i\neq j;\;i,j=1\dots n\right)\le\delta\le q_{0.99}\left(\varrho_{ij}\,|\,i\neq j;\;i,j=1\dots n\right)$ of all pairwise rank correlations. The plots included in this paper refer to the thresholds $\delta=q_{0.95}$ for SST and $\delta=q_{0.93}$ for Z500, chosen for their intuitive parcellation of the fields evocative of known teleconnection patterns. As varying the threshold affects the networks for different data sets in a similar way, the choice of δ changes the results only marginally.
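The quantile-based choice of δ can be sketched as follows. The synthetic field, its dimensions, and the variable names are illustrative only; ties are ignored in the rank computation, which is acceptable for continuous data.

```python
import numpy as np

rng = np.random.default_rng(1)
ts = rng.normal(size=(120, 40))           # 120 time steps x 40 grid cells (synthetic)

# Spearman rank correlation = Pearson correlation of the ranks
ranks = ts.argsort(axis=0).argsort(axis=0)
rho = np.corrcoef(ranks, rowvar=False)    # 40 x 40 matrix of pairwise rank correlations

off_diag = rho[~np.eye(rho.shape[0], dtype=bool)]
delta_sst = np.quantile(off_diag, 0.95)   # delta = q_0.95, as used for SST
delta_z500 = np.quantile(off_diag, 0.93)  # delta = q_0.93, as used for Z500
```

Tying δ to a quantile of the empirical correlation distribution, rather than to a fixed value, makes the threshold adapt to the overall connectivity level of each data set.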
The domains constructed this way from the detrended, deseasonalised SST anomalies of the CERA-20C ensemble mean include all important SST teleconnection patterns with interannual to decadal timescales (see for example Messié and Chavez, 2011). The map of the CERA-20C SST domains (Fig. 2a) resembles the corresponding maps for COBEv2 and HadISST in Falasca et al. (2019) reasonably well, taking into account the differing data sets and time periods. Their main domains are clearly identifiable: El Niño–Southern Oscillation (ENSO; o11, for its broad extension also reminiscent of region 2 of the Interdecadal Pacific Oscillation (IPO) tripole in Henley et al., 2015), the horseshoe pattern (o7), the South Pacific (o9), the Indian Ocean (o3), the North Tropical Atlantic (o15, with extension to the extratropics), and the South Tropical Atlantic (o1). Furthermore, there are domains in the extra-tropical southern (o2) and eastern Indian Ocean (o4), the extra-tropical southern (o12) and north-eastern (o14) Atlantic and the Norwegian Sea (o16), the Gulf Stream (o13), the North Pacific Current (o8, region 1 of the IPO tripole), a domain corresponding to region 3 of the IPO tripole (o10), the Kuroshio Extension (o5), and a domain south of Australia including the Great Australian Bight (o6). Areas where sea ice occurs are omitted because of its confounding effect on SST. In the CERA-20C Z500 map of domains (Fig. 2b), the seasonally migrating Tropical Belt (TB; a15) formed by the Hadley circulation and the two polar cells (Arctic a1 and a13, largely overlapping, and Antarctic a3) stand out, stretching around the whole globe. The mid-latitudes are populated by numerous domains with more (over ocean) or less (over land) pronounced zonal extension (cyclone tracks). The missing segmentation of the tropical belt into several domains probably results from the seasonal time resolution.

4.2.2 Network of domains

The domains of SST and Z500 are now ready for network construction (see Sect. 3.2.2).
Figure 2c illustrates the maximum lagged ($-10\le L\le 10$) distance correlations (Sect. 3.3) for all pairs of SST domains in the CERA-20C ensemble mean, omitting the geographic information for enhanced clarity. Only links significant at the FDR level α=0.05 (Sect. 3.2.2) are shown. However, even weak links are assessed as significant because the time series are long enough (4 seasons × 110 years) to allow the distance correlation to be estimated accurately. The darkest shades (except for the self-links) correspond to the links between ENSO (o11), the horseshoe (o7), and IPO3 (o10): o11↔o7, o11↔o10, o7↔o10. We see enhanced connectivity of o7, o10, and o11 to the northern and southern Pacific Ocean (o8, o9) and from the Pacific to the Indian Ocean (o7/o10/o11↔o3), corresponding to known ENSO teleconnections, but not to the Kuroshio Extension (o5). The southern Indian Ocean domain is furthermore linked to the South Atlantic (o2↔o12). The intra-Atlantic links are much weaker: o14, o15, and o16 are largely overlapping domains and together conceivably form the Atlantic Multidecadal Oscillation (AMO); the Gulf Stream is linked to the north-eastern Atlantic (o13↔o14) as well as the tropical to the extra-tropical South Atlantic (o1↔o12). The South Atlantic is also weakly connected to all North Atlantic domains (o12↔o13/o14/o15/o16), but there is no link between the North Atlantic and the South Tropical Atlantic (o1). It might be hypothesised that this bypass is related to the thermohaline circulation that tunnels the shallow subtropical cell (Liu and Alexander, 2007). According to the network, the Atlantic is connected to the other oceans only via the Southern Ocean, with links o12/o13/o14/o16↔o9, o14/o15/o16↔o6, o1/o12↔o2, and o16↔o2 that appear rather weak, although visible against their virtually zero background. A link between the South Tropical Atlantic (o1) and ENSO (o11) as proposed in Falasca et al. (2019) and Rodríguez-Fonseca et al.
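For reference, the sample distance correlation underlying the links (Sect. 3.3) can be implemented directly from double-centred pairwise distance matrices. This is a generic textbook sketch of the standard estimator, not the authors' code, and the example series are synthetic.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation between two 1-D series (standard estimator)."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    y = np.asarray(y, dtype=float).reshape(-1, 1)

    def double_centred(a):
        d = np.abs(a - a.T)                                  # pairwise distance matrix
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

    A, B = double_centred(x), double_centred(y)
    dcov2 = (A * B).mean()                                   # squared distance covariance
    dvar2_x, dvar2_y = (A * A).mean(), (B * B).mean()
    denom = np.sqrt(dvar2_x * dvar2_y)
    return 0.0 if denom == 0 else np.sqrt(max(dcov2, 0.0) / denom)

rng = np.random.default_rng(2)
x = rng.normal(size=440)                                     # 4 seasons x 110 years
print(distance_correlation(x, x**2))                         # nonlinear dependence: clearly > 0
print(distance_correlation(x, rng.normal(size=440)))         # independent noise: near 0
```

Unlike Pearson correlation, the estimator detects the purely nonlinear dependence between x and x²; significance in the paper is then assessed via FDR screening, which this sketch omits.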
(2009) is not apparent in our network. This absence is likely caused by the non-stationarity of this link, which was not observed before 1970. Nevertheless, it does appear when a network is constructed for the period 1971–2010 (not shown). Note that allowing for lagged dependence changes the network only marginally compared to a network with only instantaneous links. Only a few links increase in distance correlation strength by more than 0.05, and none by more than 0.1. All links already exist in the instantaneous network, and the structure of the network remains unchanged. The network between CERA-20C Z500 domains (Fig. 2e) is considerably weaker than the SST network, possibly a consequence of the stronger high-frequency variability in the Z500 time series in response to seasonally varying solar forcing combined with weaker low-frequency variability caused by stronger mixing of the freely flowing air masses. Moreover, many of the known atmospheric teleconnections vary considerably throughout the year, which weakens the all-season dependence between the involved domains. Apart from the overlapping domains a1/a13, a9/a10, and a7/a12, the Tropical Belt (a15) is the most strongly connected domain, with links over all oceans to the mid-latitudinal Ferrel cell domains enveloping the cyclone tracks (a15↔a4/a7/a9/a10/a12/a14), to which the undisturbed Hadley circulation releases a substantial amount of energy. Domains over land have fewer and weaker links. Known atmospheric teleconnections are clearly identifiable: the Pacific North America Pattern (PNA) with links a10↔a11 and a10↔a14, but interestingly not a11↔a14, and the North Atlantic Oscillation (NAO) with a link a13↔a14 (and a much weaker a1↔a14). Other complex teleconnections also seem to involve the Arctic domains: a1↔a2/a6/a8/a16 and a13↔a8. In contrast, the Antarctic domain (a3) is largely autonomous, as discussed in Spensberger et al. (2020).
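The maximum-lag link construction described above can be sketched as follows. To keep the example short and dependency-free, plain Pearson correlation stands in for the distance correlation actually used in the paper, and the series, noise level, and imposed lag are synthetic.

```python
import numpy as np

def max_lagged_corr(x, y, max_lag=10):
    """Scan lags -max_lag..max_lag and return the strongest absolute
    correlation and the lag at which it occurs (Pearson correlation
    as a stand-in for the paper's distance correlation)."""
    best, best_lag = 0.0, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]
        else:
            a, b = x[:len(x) + lag], y[-lag:]
        r = abs(np.corrcoef(a, b)[0, 1])
        if r > best:
            best, best_lag = r, lag
    return best, best_lag

rng = np.random.default_rng(3)
x = rng.normal(size=440)
y = np.roll(x, 3) + 0.3 * rng.normal(size=440)   # y trails x by three steps
r, lag = max_lagged_corr(x, y)                   # recovers lag = -3 under this sign convention
```

Scanning a symmetric lag window in this way is what allows the network to pick up delayed teleconnections; as noted above, for these data the gain over the purely instantaneous network is small.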
Lagged dependence is irrelevant in the Z500 network. We notice that many known atmospheric teleconnections are defined as higher-order modes of some EOF decomposition. As such they exist only as additive modulations of their corresponding leading modes. We would therefore not expect to find many of them in our networks. Network methods allow the investigation of interactions between different climatological fields in a straightforward way, constructing cross-networks between (in our case) SST and Z500 domains that describe the coupled ocean–atmosphere variability (Liu and Alexander, 2007). We notice that the inference of links between the domains of two unipartite networks is different from the construction of bipartite communities in multi-layer networks as in Ekhtiari et al. (2021). Here, we simply calculate the distance correlations between pairs of one SST and one Z500 domain. The inferred CERA-20C SST–Z500 cross-links are shown in Fig. 2d. The connectivity is mostly quite weak, except for the cross-links from the Tropical Belt (a15) and the northern and southern Pacific Z500 domains (a9, a10, a12) to the ENSO-related SST domains (o7, o8, o9, o10, o11) and the tropical Indian Ocean (o3), but also the Great Australian Bight (o6). This feature was also observed by Feng et al. (2012), who related it to the Walker circulation. Z500 domains a4 and a7 participate in this pattern, but to a lesser extent. Z500 domains over oceans are usually connected to their underlying SST counterparts (a4↔o2, a7↔o6, a9/a10↔o8, a14↔o13 (SST modulating the NAO), a15↔o3/o11), although in the Atlantic this dependence is exceptionally weak (a15↔o1/o15, a16↔o14, a3↔o12). But teleconnections to more distant SST domains are, in some instances, as strong as or even stronger than the proximate cross-links (a4↔o3/o10/o11/o12, a7↔o12, a14↔o15). Interestingly, the Arctic Z500 domain (a13) is weakly linked to the AMO domain (o15), but not to the North Pacific.
Except for slightly increased overall connectivity levels, supposedly mediated by the SST, allowing for lagged dependence does not change the network. The analogous plot for 20CRv3 can be found in Fig. S3.

4.2.3 Third-order interactions

As before, this subsection presents only results for CERA-20C over the time period 1901–2010. The overall high level of connectivity between SST domains motivated us to take a deeper look into the dependence structure of the climate system. In a modest first attempt, we search for interacting triples in the sense of Lancaster, termed 2-hyperedges in graph theory, taking all combinations of three SST domains and calculating their third-order distance multicorrelation as introduced in Eq. (7) in Sect. 3.3.1. As discussed there, we choose a large FDR level α=0.2 so as not to suppress too many distance multicorrelations. To avoid cumbersome evaluations with different lag combinations, we stick to instantaneous networks. Only a small number (13) of significant third-order dependencies are detected (we list them in Table 2 instead of plotting them), all somehow related to the ENSO phenomenon, one of them being the IPO tripole itself. The hyperedges also include the tropical Indian Ocean (o3) and the Great Australian Bight (o6). As the nature of the Lancaster interaction is inherently non-linear, this concentration on ENSO corresponds to the findings in Hlinka et al. (2014), who detect substantial non-linear contributions to mutual information in SST (apart from trends and seasonal variance) mainly in the central tropical Pacific. Likewise, Pires and Hannachi (2017) find synchronised extremes of uncorrelated PCs of SST in the Pacific that cannot be explained by linear interaction. Despite this, one distance multicorrelation is also detected in the North Atlantic: the SST triple (o14, o15, o16), which corresponds to the AMO. Note that not every triple with strong pairwise dependencies also has a significant third-order dependence.
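Significance screening at a chosen FDR level — α = 0.05 for the pairwise links and α = 0.2 here — follows the Benjamini-Hochberg step-up rule. The sketch below is the generic procedure with illustrative p-values; the function name and example data are ours.

```python
import numpy as np

def bh_discoveries(pvals, alpha):
    """Benjamini-Hochberg step-up: boolean mask of p-values kept at FDR alpha."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m          # alpha * k / m for rank k
    below = p[order] <= thresholds
    keep = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()                    # largest rank meeting its threshold
        keep[order[:k + 1]] = True                        # keep all smaller p-values too
    return keep

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]
print(bh_discoveries(pvals, alpha=0.05))   # two discoveries at alpha = 0.05
```

Raising α from 0.05 to 0.2 admits markedly more of these borderline p-values, which is exactly why the larger level is used for the rarer third-order dependencies.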
Table 2 shows the hyperedges along with their distance multicorrelation and the sum of their pairwise distance correlations. As distance multicorrelation is symmetric, every significant hyperedge is listed only once in the table. Note also that the sum of pairwise distance correlations is not bounded by 1 because the pairwise dependencies are not mutually exclusive. Although the detected distance multicorrelations are significant, they amount to at most 20 % of the sum of the respective pairwise distance correlations. That means third-order interactions complement but do not outweigh pairwise dependence in the three-dimensional joint dependence. The same comments essentially apply to cross-hyperedges consisting of two SST domains and one Z500 domain or one SST domain and two Z500 domains. We detected 15 and 5 significant cross-hyperedges, respectively, in the Pacific, which all resemble some ENSO interaction. The Z500 domains a13 and a14 (NAO) have no notable distance multicorrelation with North Atlantic SST domains, indicating that the North Atlantic is linked to the NAO domains on a pairwise basis (o15↔a13/a14), but no higher-order interaction is taking place. There is no hyperedge of three Z500 domains with significant multicorrelation. Known atmospheric tripoles like the Arctic Oscillation (a9, a13, a14) and the Pacific North America Pattern (a10, a11, a14) apparently lack significant third-order dependence. We believe that the construction of higher-order networks including hyperedges by means of distance multicorrelation might well be one step towards understanding the synergies emerging from multivariate coupling of large-scale oceanic/atmospheric teleconnections.

4.3 Comparison of networks

4.3.1 Reference networks

We turn to the comparison of reference networks in terms of the NQS and MNQS criteria (see Sect. 3.4), calculated from the adjacency matrices M containing the regionally distributed distance correlation links between all pairs of domains (Eq. 8).
As CERA-20C was produced as a 10-member ensemble representing the inevitable sampling and modelling uncertainty inherent in the production process, we take this opportunity to construct the δ-MAPS networks individually for each member. The results are matched to the networks derived for the CERA-20C ensemble mean. The CERA-20C individual networks for the complete time period 1901–2010 are very similar to each other, with average NQSs close to 1 for all three parameters (average NQS_SST=0.98, average NQS_Z500=0.94, average NQS_SST–Z500=0.96), such that the MNQSs have a mean of 0.96 with only a small spread. The average MNQS between the individual CERA-20C runs and the CERA-20C ensemble mean is 0.96. The small differences are brought about by the pattern correlation factor in netSSIM, the mean and variance factor being virtually equal to 1. The networks for the shorter periods 1901–1955 and 1951–2010 are equally similar with average MNQS=0.95 and 0.95 between runs and 0.95 and 0.96 for the ensemble mean, respectively. Because the networks for individual CERA-20C runs and the CERA-20C ensemble mean are nearly indistinguishable, we only take the CERA-20C ensemble mean networks for reference in the following comparisons. When analysing the temporal evolution of the connectivity in the CERA-20C ensemble mean, we find good agreement between the first and second half of the century (MNQS=0.87; Table 3), resulting from comparable differences in the SST and SST–Z500 networks and higher similarity at Z500 (NQS_SST=0.84, NQS_Z500=0.93, NQS_SST–Z500=0.84). In contrast, the full period is more similar to the first half in all networks (MNQS=0.96, NQS_SST=0.95, NQS_Z500=0.96, NQS_SST–Z500=0.95) than to the second half because especially the SST–Z500 networks bear more differences (MNQS=0.92, NQS_SST= 0.92, NQS_Z500=0.94, NQS_SST–Z500=0.89). 
We emphasise that the networks contain only information about the strength of the dependencies between the domains and not about their functional form. Because of deviating domain extension and numbering, comparing the networks by means of the rectangular network plots (like in Fig. 2c–e) is cumbersome. In Fig. 3 we have plotted two-modal slices of the spatially distributed adjacency hypermatrices M with respect to grid cells in the ENSO domain, in the AMO domain and in the Tropical Belt, respectively. The comparison of these slices is evidently not exhaustive but may give a hint regarding the nature of the differences between the networks. The domains in the three CERA-20C SST network slices for the ENSO domain (Fig. 3a–c) are very similar in shape and size, but the links between the domains are differently distributed. The networks most obviously disagree in link strength from ENSO to the tropical Indian Ocean, but also from ENSO to the North Tropical Atlantic, to the North Pacific, and to the Southern Ocean. The same is visible in the network slices for the AMO domain (Fig. 3d–f). In contrast, the CERA-20C Z500 network slices for the Tropical Belt (Fig. 3g–i) bear more apparent similarity than the SST network slices, which was already apparent in the network quality scores above. Although the shape of the tropical belt differs slightly more than the shape of the ENSO domain, the links to the rest of the globe resemble each other more strongly. However, the domains over the North Pacific and the southern Indian Ocean seem somewhat ambiguous. The cross-links from ENSO to the Z500 domains (Fig. 3j–l) and from the Tropical Belt to the SST domains (not shown) show differences similar to the unipartite networks. Yet, the stabilising effect of the self-links (large patches with distance correlation 1) does not apply to the SST–Z500 cross-networks, such that the network scores may turn out a little lower. 
As regards the second reanalysis, 20CRv3, we observe strong similarity to the CERA-20C ensemble mean in the two shorter time periods 1951–2010 and 1901–1955 (MNQS=0.89 and MNQS=0.88; Table 3 and Fig. S4), where disagreement within the same time period is mainly restricted to higher southern latitudes (remember that the SSIM includes an area weighting). But dissimilarities between the first and the second half of the century are stronger in 20CRv3 than in CERA-20C (MNQS=0.81 and MNQS=0.87; Table 3). Notably, in 1901–1955 20CRv3 shows the same strong connection between SST domains around the whole tropics as CERA-20C, which is lost in 1951–2010 in both reanalyses. In contrast, the similarity between 20CRv3 and CERA-20C is slightly reduced in 1901–2010 (MNQS=0.82; Table 3 and Fig. S4), mainly due to differing atmospheric interactions and the weaker cross-links in 20CRv3 compared to CERA-20C (NQS_SST=0.94, NQS_Z500=0.81, NQS_SST–Z500=0.72). In all three networks (SST, Z500, SST–Z500) we observe that regional dissimilarity increases with latitude. Table S1 shows pairs of most similar domains between CERA-20C and 20CRv3 along with their domain-wise network quality scores. Besides, the similarity between the different time periods in 20CRv3 is not the same as in CERA-20C, with 1901–2010 more similar to 1951–2010 than to 1901–1955 (MNQS=0.84 and MNQS=0.90; Table 3 and Fig. S4). For example, in contrast to CERA-20C, the link between ENSO and the South Pacific vanishes after 1950 in 20CRv3. This might be a consequence of sparse observations in the first half of the century and thus a stronger dynamical heritage from the models used to produce the reanalyses. On the other hand, there might have been changes in connectivity driven by increasing GHG levels, which are not equally reflected in CERA-20C and 20CRv3 (they are model results after all).
Caution therefore leads us to restrict the comparison of CMIP6 data sets to reanalyses in the period 1951–2010.

4.3.2 CMIP6 networks

The networks belonging to the CMIP6 historical projections (listed in Table 1) are compared in Fig. 4 to the CERA-20C ensemble mean (bold black cross marks) and to the 20CRv3 best estimate (bold red cross marks) in the time period 1951–2010 in terms of individual network NQSs (for SST networks (a), for Z500 networks (b), and for the cross-networks (c)) and in terms of MNQSs for each reference, respectively (d). Finally, we take the average of both MNQSs to account for the uncertainty inherent in the reanalyses: $\frac{1}{2}\left(\text{MNQS}\left(\text{CERA-20C}\right)+\text{MNQS}\left(\text{20CRv3}\right)\right)$ ((e), bold cross marks). As expected, the similarity between models and references is generally weaker than between references, although in the Z500 networks some models reach a comparable level. Network quality scores are highest for Z500, followed by SST and SST–Z500. SST–Z500 cross-networks show the greatest deviations across models as well as across references. The seemingly contradictory scores for Z500 with respect to CERA-20C and 20CRv3 have to be put into perspective with their very high values and can be traced back to the differences between the reanalyses. When applying the alternative, pointwise SSIM calculation (Fig. 4, thin black and red cross marks), the final average MNQS values are somewhat lower in their overall level, but similar in spread, and the model ranking suffers only minor changes. The differences between the reanalyses are also reflected in the MNQSs of the models, where the reanalyses agree very well upon some models (HadGEM3-GC31-LL, IPSL-CM6A-LR, MPI-ESM1-2-HR, MIROC-ES2L, MIROC6) but less upon others (MRI-ESM2-0, TaiESM1, CNRM-CM6-1, CNRM-ESM2-1). But altogether, a tendency to differentiate between more/less similar models with respect to reanalyses is clearly visible.
We conclude that, when combining several references from independent sources, the average MNQS over these references is a valid evaluation instrument for assessing whether the teleconnections between large climate components in a general circulation model are realistically represented. Still, as our evaluation is restricted to a single run per model, we are not able to differentiate between good runs and good models as such. Using the example of four of the highest-ranking GCM runs with respect to MNQS, we illustrate in short the opportunities offered by the δ-MAPS approach to detect model deficiencies. We examine some of the pointwise adjacency maps of EC-Earth3, UKESM1-0-LL, MPI-ESM1-2-HR, and IPSL-CM6A-LR in comparison to CERA-20C and 20CRv3 over 1951–2010 (Figs. S5–S9). In the SST networks, we notice that differences are not restricted to higher latitudes, as was the case for the two reanalyses. Even in the main feature of interannual variability, ENSO, spatial connectivity deviates significantly. In all models the tropical Indian Ocean depends much more strongly, although to varying degrees, on ENSO than in both reanalyses (Fig. S5). EC-Earth3 and IPSL-CM6A-LR do not at all reproduce the northern extension of the ENSO domain seen in both reanalyses (Fig. S5a, d, e, f), which reflects the widely recognised low-frequency interdependency between ENSO and the Pacific Decadal Oscillation (PDO) (Henley et al., 2015). The links to the southern Indian Ocean and the South Atlantic differ considerably across models, but no model shows better performance in all domains. In MPI-ESM1-2-HR the dependence between AMO and ENSO is exaggerated, whereas in IPSL-CM6A-LR the Norwegian Sea is nearly disconnected from the Tropical North Atlantic, which is not consistent with AMO (Fig. S6c and d). As regards Z500, UKESM1-0-LL shows an unrealistic link between the Tropical Belt and the Antarctic domain (Fig. S7b). 
At the same time, the dependence of the Arctic domain is matched well only in UKESM1-0-LL (Fig. S8b). In contrast, the cross-links from ENSO to Z500 are well represented in all four models (Fig. S9). Continuing the analysis of all pointwise adjacency maps, it would be possible to identify regions/climate phenomena of higher and lower confidence in any model, an exercise that might be instructive for both modelling groups and downstream users of climate projections. In order to evaluate the physical plausibility of CMIP6 GCM output, we have constructed functional interaction networks within and between the SST and Z500 multivariate time series of 2 reanalyses (CERA-20C and 20CRv3) and 22 GCM output data sets using the δ-MAPS procedure. In response to several theoretical challenges related to the nature of long-term climate data, a number of innovations were introduced into δ-MAPS:

• detrending with season-reliant trend EOFs
• network construction using distance correlation
• distance multicorrelation for higher-order interactions
• network comparison with the structural similarity index
• construction of a multi-reference multivariate network quality score.

First of all, the two reanalyses were compared to one another in considerable detail, including the temporal evolution of the interactions in the course of the 20th century. It could not be excluded that inconsistencies between the first and second half of the century arise at least partly from data uncertainty. The evaluation of CMIP6 model output against the references revealed a very high general similarity of the atmospheric connectivity, though with gradual differences. Oceanic teleconnections are less accurately reflected and the model differences more pronounced. The strongest deviations are found in the cross-networks between Z500 and SST, which co-occur sometimes, but not always, with lower network quality scores in the unipartite networks.
We combined the three network quality scores for each CMIP6 model on an equal basis, emphasising the equivalent importance of all considered geophysical subsystems in the generation of the earth's climate. Taking into account the uncertainty inherent in any reference, the average multivariate network quality score over several, preferably independent, references can certainly be considered a suitable criterion to assess the similarity of physical interactions between climate components in a model to those in observations. In addition, the proposed complex network framework combined with the distance correlation measure offers many promising multivariate extensions of δ-MAPS as, for example, node definition based on multivariate time series, consideration of higher-order dependence, interactions on multiple timescales, and time-evolving networks. Such comparisons could be very useful to investigate subtle differences between various reanalyses. Besides, the characterisation of network evolution from past to future could add a new facet to the understanding of climate change. CD developed the concept, processed the data, prepared the manuscript, and produced all figures; KW and AW contributed with in-depth discussions, interpretation, and review. The contact author has declared that neither of the authors has any competing interests. Publisher’s note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. We thank the editor and two anonymous referees for their insightful comments on the paper. Support for the Twentieth Century Reanalysis Project version 3 data set is provided by the US Department of Energy, Office of Science Biological and Environmental Research (BER); by the NOAA Climate Program Office; and by the NOAA Physical Sciences Laboratory. This paper was edited by Kira Rehfeld and reviewed by two anonymous referees. 
Agarwal, A., Maheswaran, R., Marwan, N., Caesar, L., and Kurths, J.: Wavelet-based multiscale similarity measure for complex networks, Eur. Phys. J. B, 91, 296, https://doi.org/10.1140/epjb/e2018-90460-6, 2018.
Agarwal, A., Caesar, L., Marwan, N., Maheswaran, R., Merz, B., and Kurths, J.: Network-based identification and characterization of teleconnections on different scales, Sci. Rep., 9, 8808, https://doi.org/10.1038/s41598-019-45423-5, 2019.
Barbosa, S. M. and Andersen, O. B.: Trend patterns in global sea surface temperature, Int. J. Climatol., 29, 2049–2055, https://doi.org/10.1002/joc.1855, 2009.
Battiston, F., Cencetti, G., Iacopini, I., Latora, V., Lucas, M., Patania, A., Young, J.-G., and Petri, G.: Networks beyond pairwise interactions: Structure and dynamics, Phys. Rep., 874, 1–92, https://doi.org/10.1016/j.physrep.2020.05.004, 2020.
Benjamini, Y.: Discovering the false discovery rate, J. R. Statist. Soc. B, 72, 405–416, https://doi.org/10.1111/j.1467-9868.2010.00746.x, 2010.
Bi, D., Dix, M., Marsland, S., O'Farrell, S., Sullivan, A., Bodman, R., Law, R., Harman, I., Srbinovsky, J., Rashid, H. A., Dobrohotoff, P., Mackallah, C., Yan, H., Hirst, A., Savita, A., Boeira Dias, F., Woodhouse, M., Fiedler, R., and Heerdegen, A.: Configuration and spin-up of ACCESS-CM2, the new generation Australian Community Climate and Earth System Simulator Coupled Model, J. South. Hemisphere Earth Syst. Sci., 70, 225–251, https://doi.org/10.1071/ES19040, 2020.
Böttcher, B., Keller-Ressel, M., and Schilling, R.: Distance multivariance: New dependence measures for random vectors, Ann. Stat., 47, 2757–2789, https://doi.org/10.1214/18-AOS1764, 2019.
Boucher, O., Servonnat, J., Albright, A.
L., Aumont, O., Balkanski, Y., Bastrikov, V., Bekki, S., Bonnet, R., Bony, S., Bopp, L., Braconnot, P., Brockmann, P., Cadule, P., Caubel, A., Cheruy, F., Codron, F., Cozic, A., Cugnet, D., D'Andrea, F., Davini, P., de Lavergne, C., Denvil, S., Deshayes, J., Devilliers, M., Ducharne, A., Dufresne, J.-L., Dupont, E., Éthé, C., Fairhead, L., Falletti, L., Flavoni, S., Foujols, M.-A., Gardoll, S., Gastineau, G., Ghattas, J., Grandpeix, J.-Y., Guenet, B., Guez, L. E., Guilyardi, E., Guimberteau, M., Hauglustaine, D., Hourdin, F., Idelkadi, A., Joussaume, S., Kageyama, M., Khodri, M., Krinner, G., Lebas, N., Levavasseur, G., Lévy, C., Li, L., Lott, F., Lurton, T., Luyssaert, S., Madec, G., Madeleine, J.-B., Maignan, F., Marchand, M., Marti, O., Mellul, L., Meurdesoif, Y., Mignot, J., Musat, I., Ottlé, C., Peylin, P., Planton, Y., Polcher, J., Rio, C., Rochetin, N., Rousset, C., Sepulchre, P., Sima, A., Swingedouw, D., Thiéblemont, R., Traore, A. K., Vancoppenolle, M., Vial, J., Vialard, J., Viovy, N., and Vuichard, N.: Presentation and Evaluation of the IPSL-CM6A-LR Climate Model, J. Adv. Model. Earth Sy., 12, e2019MS002010, https://doi.org/10.1029/2019MS002010, 2020.a Brands, S.: A circulation-based performance atlas of the CMIP5 and 6 models for regional climate studies in the Northern Hemisphere mid-to-high latitudes, Geosci. Model Dev., 15, 1375–1411, https:// doi.org/10.5194/gmd-15-1375-2022, 2022.a, b Brunet, D., Vrscay, E. R., and Wang, Z.: A Class of Image Metrics Based on the Structural Similarity Quality Index, in: Image Analysis and Recognition, edited by: Kamel, M. and Campilho, A., vol. 6753 Part 1, 100–110, Springer, Berlin, Heidelberg, https://doi.org/10.1007/978-3-642-21593-3_11, 2011.a Brunner, L., Pendergrass, A. G., Lehner, F., Merrifield, A. L., Lorenz, R., and Knutti, R.: Reduced global warming from CMIP6 projections when weighting models by performance and independence, Earth Syst. 
Dynam., 11, 995–1012, https://doi.org/10.5194/esd-11-995-2020, 2020.a Cannon, A. J.: Reductions in daily continental-scale atmospheric circulation biases between generations of global climate models: CMIP5 to CMIP6, Environ. Res. Lett., 15, 064006, https://doi.org/ 10.1088/1748-9326/ab7e4f, 2020.a, b Cherchi, A., Fogli, P. G., Lovato, T., Peano, D., Iovino, D., Gualdi, S., Masina, S., Scoccimarro, E., Materia, S., Bellucci, A., and Navarra, A.: Global Mean Climate and Main Patterns of Variability in the CMCC-CM2 Coupled Model, J. Adv. Model. Earth Sy., 11, 185–209, https://doi.org/10.1029/2018MS001369, 2019.a, b Coburn, J. and Pryor, S. C.: Differential Credibility of Climate Modes in CMIP6, J. Climate, 34, 8145–8164, https://doi.org/10.1175/JCLI-D-21-0359.1, 2021.a Danabasoglu, G., Lamarque, J.-F., Bacmeister, J., Bailey, D. A., DuVivier, A. K., Edwards, J., Emmons, L. K., Fasullo, J., Garcia, R., Gettelman, A., Hannay, C., Holland, M. M., Large, W. G., Lauritzen, P. H., Lawrence, D. M., Lenaerts, J. T. M., Lindsay, K., Lipscomb, W. H., Mills, M. J., Neale, R., Oleson, K. W., Otto-Bliesner, B., Phillips, A. S., Sacks, W., Tilmes, S., van Kampenhout, L., Vertenstein, M., Bertini, A., Dennis, J., Deser, C., Fischer, C., Fox-Kemper, B., Kay, J. E., Kinnison, D., Kushner, P. J., Larson, V. E., Long, M. C., Mickelson, S., Moore, J. K., Nienhouse, E., Polvani, L., Rasch, P. J., and Strand, W. G.: The Community Earth System Model Version 2 (CESM2), J. Adv. Model. Earth Sy., 12, e2019MS001916, https://doi.org/10.1029/2019MS001916, 2020.a De Lathauwer, L., De Moor, B., and Vandewalle, J.: A Multilinear Singular Value Decomposition, SIAM J. Matrix Anal. A., 21, 1253–1278, https://doi.org/10.1137/S0895479896305696, 2000.a Deutsches Klimarechenzentrum: ESGF-Data, [data set], https://esgf-node.llnl.gov/search/cmip6/, last access: 6 December 2021.a Dijkstra, H. 
A., Hernández-García, E., Masoller, C., and Barreiro, M.: Networks in Climate, Cambridge University Press, Cambridge, https://doi.org/10.1017/9781316275757, 2019.a Donges, F. J., Zou, Y., Marwan, N., and Kurths, J.: Complex networks in climate dynamics: Comparing linear and nonlinear network construction methods, Eur. Phys. J. Spec. Top., 174, 157–179, https:// doi.org/10.1140/epjst/e2009-01098-2, 2009.a Donges, J., Schultz, H., Marwan, N., Zou, Y., and Kurths, J.: Investigating the topology of interacting networks: Theory and application to coupled climate subnetworks, Eur. Phys. J. B, 84, 635–651, https://doi.org/10.1140/epjb/e2011-10795-8, 2011.a Döscher, R., Acosta, M., Alessandri, A., Anthoni, P., Arsouze, T., Bergman, T., Bernardello, R., Boussetta, S., Caron, L.-P., Carver, G., Castrillo, M., Catalano, F., Cvijanovic, I., Davini, P., Dekker, E., Doblas-Reyes, F. J., Docquier, D., Echevarria, P., Fladrich, U., Fuentes-Franco, R., Gröger, M., v. Hardenberg, J., Hieronymus, J., Karami, M. P., Keskinen, J.-P., Koenigk, T., Makkonen, R., Massonnet, F., Ménégoz, M., Miller, P. A., Moreno-Chamarro, E., Nieradzik, L., van Noije, T., Nolan, P., O'Donnell, D., Ollinaho, P., van den Oord, G., Ortega, P., Prims, O. T., Ramos, A., Reerink, T., Rousset, C., Ruprich-Robert, Y., Le Sager, P., Schmith, T., Schrödner, R., Serva, F., Sicardi, V., Sloth Madsen, M., Smith, B., Tian, T., Tourigny, E., Uotila, P., Vancoppenolle, M., Wang, S., Wårlind, D., Willén, U., Wyser, K., Yang, S., Yepes-Arbós, X., and Zhang, Q.: The EC-Earth3 Earth system model for the Coupled Model Intercomparison Project 6, Geosci. Model Dev., 15, 2973–3020, https://doi.org/10.5194/gmd-15-2973-2022, 2022.a, b Duan, Y., Kumar, S., and Kinter, J. L.: Evaluation of Long-Term Temperature Trend and Variability in CMIP6 Multimodel Ensemble, Geophys. Res. 
Lett., 48, e2021GL093227, https://doi.org/10.1029/ 2021GL093227, 2021.a European Centre for Medium-Range Weather Forecasts: CERA-20C, [data set], https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/cera-20c, last access: 9 May 2020.a Ekhtiari, N., Ciemer, C., Kirsch, C., and Donner, R.: Coupled network analysis revealing global monthly scale co-variability patterns between sea-surface temperatures and precipitation in dependence on the ENSO state, Eur. Phys. J.-Spec. Top., 230, 3019–3032, https://doi.org/10.1140/epjs/s11734-021-00168-z, 2021.a, b Falasca, F.: delta-MAPS, [code], https://github.com/FabriFalasca/delta-MAPS, last access: 10 May 2020.a Falasca, F., Bracco, A., Nenes, A., and Fountalis, I.: Dimensionality reduction and network inference for climate data using δ-MAPS: Application to the CESM Large Ensemble sea surface temperature, J. Adv. Model. Earth Sy., 11, 1479–1515, https://doi.org/10.1029/2019MS001654, 2019.a, b, c, d, e, f, g Falasca, F., Crétat, J., and Braconnot, P. Bracco, A.: Spatiotemporal complexity and time-dependent networks in sea surface temperature from mid- to late Holocene, Eur. Phys. J. Plus, 135, 392, https://doi.org/10.1140/epjp/s13360-020-00403-x, 2020.a, b Fasullo, J. T., Phillips, A. S., and Deser, C.: Evaluation of Leading Modes of Climate Variability in the CMIP Archives, J. Climate, 33, 5527–5545, https://doi.org/10.1175/JCLI-D-19-1024.1, 2020.a Feng, A., Gong, Z., Wang, Q., and Feng, G.: Three-dimensional air–sea interactions investigated with bilayer networks, Theor. Appl. Climatol., 109, 635–643, https://doi.org/10.1007/s00704-012-0600-7, 2012.a, b Fisher, M. J.: Predictable Components in Australian Daily Temperature Data, J. Climate, 28, 5969–5984, https://doi.org/10.1175/JCLI-D-14-00713.1, 2015.a Fortunato, S. and Hric, D.: Community detection in networks: A user guide, Phys. 
Rep., 659, 1–44, https://doi.org/10.1016/j.physrep.2016.09.002, 2016.a Fountalis, I., Bracco, A., and Dovrolis, C.: ENSO in CMIP5 simulations: network connectivity from the recent past to the twenty-third century, Clim. Dynam., 45, 511–538, https://doi.org/10.1007/ s00382-014-2412-1, 2015.a Fountalis, I., Dovrolis, C., Bracco, A., Dilkina, B., and Keilholz, S.: δ-MAPS: from spatio-temporal data to a weighted and lagged network between functional domains, Appl. Netw. Sci., 3, 21, https:/ /doi.org/10.1007/s41109-018-0078-z, 2018.a, b, c, d, e, f, g, h, i, j, k Frankignoul, C., Gastineau, G., and Kwon, Y.-O.: Estimation of the SST Response to Anthropogenic and External Forcing and Its Impact on the Atlantic Multidecadal Oscillation and the Pacific Decadal Oscillation, J. Climate, 30, 9871–9895, https://doi.org/10.1175/JCLI-D-17-0009.1, 2017.a Fujiwara, M., Hibino, T., Mehta, S. K., Gray, L., Mitchell, D., and Anstey, J.: Global temperature response to the major volcanic eruptions in multiple reanalysis data sets, Atmos. Chem. Phys., 15, 13507–13518, https://doi.org/10.5194/acp-15-13507-2015, 2015.a Fulton, D. J. and Hegerl, G. C.: Testing Methods of Pattern Extraction for Climate Data Using Synthetic Modes, J. Climate, 34, 7645–7660, https://doi.org/10.1175/JCLI-D-20-0871.1, 2021.a, b Garreaud, R., Clem, K., and Veloso, J.: The south pacific pressure trend dipole and the southern blob, J. Climate, 34, 7661–7676, https://doi.org/10.1175/JCLI-D-20-0886.1, 2021.a Gillett, N. P., Fyfe, J. C., and Parker, D.: Attribution of observed sea level pressure trends to greenhouse gas, aerosol, and ozone changes, Geophys. Res. Lett., 40, 2302–2306, https://doi.org/ 10.1002/grl.50500, 2013.a Gutjahr, O., Putrasahan, D., Lohmann, K., Jungclaus, J. H., von Storch, J.-S., Brüggemann, N., Haak, H., and Stössel, A.: Max Planck Institute Earth System Model (MPI-ESM1.2) for the High-Resolution Model Intercomparison Project (HighResMIP), Geosci. 
Model Dev., 12, 3241–3281, https://doi.org/10.5194/gmd-12-3241-2019, 2019.a Hajima, T., Watanabe, M., Yamamoto, A., Tatebe, H., Noguchi, M. A., Abe, M., Ohgaito, R., Ito, A., Yamazaki, D., Okajima, H., Ito, A., Takata, K., Ogochi, K., Watanabe, S., and Kawamiya, M.: Development of the MIROC-ES2L Earth system model and the evaluation of biogeochemical processes and feedbacks, Geosci. Model Dev., 13, 2197–2244, https://doi.org/10.5194/gmd-13-2197-2020, 2020.a Hannachi, A.: Pattern hunting in climate: a new method for finding trends in gridded climate data, Int. J. Climatol., 27, 1–15, https://doi.org/10.1002/joc.1375, 2007.a, b, c Henley, B. J., Gergis, J., Karoly, D. J., Power, S., Kennedy, J., and Folland, C. K.: A Tripole Index for the Interdecadal Pacific Oscillation, Clim. Dynam., 45, 3077–3090, https://doi.org/10.1007/ s00382-015-2525-1, 2015.a, b Hlinka, J., Hartman, D., Vejmelka, M., Novotná, D., and Paluš, M.: Non-linear dependence and teleconnections in climate data: sources, relevance, nonstationarity, Clim. Dynam., 42, 1873–1886, https:/ /doi.org/10.1007/s00382-013-1780-2, 2014.a Hynčica, M. and Huth, R.: Modes of atmospheric circulation variability in the Northern Extratropics: A comparison of five reanalyses, J. Climate, 33, 10707–10726, https://doi.org/10.1175/ JCLI-D-19-0904.1, 2020.a, b Jajcay, N., Kravtsov, S., Sugihara, G., Tsonis, A. A., and Paluš, M.: Synchronization and causality across time scales in El Niño Southern Oscillation, npj Clim. Atmos. Sci., 1, 33, https://doi.org/ 10.1038/s41612-018-0043-7, 2018.a Fokianos, K. and Pitsillou, M.: Testing independence for multivariate time series via the auto-distance correlation matrix, Biometrika, 105, 337–352, https://doi.org/10.1093/biomet/asx082, 2018.a Kittel, T., Ciemer, C., Lotfi, N., Peron, T., Rodrigues, F., Kurths, J., and Donner, R.: Evolving climate network perspectives on global surface air temperature effects of ENSO and strong volcanic eruptions, Eur. Phys. J.-Spec. 
Top., 230, 3075–3100, https://doi.org/10.1140/epjs/s11734-021-00269-9, 2021.a Knutson, T. R. and Ploshay, J.: Sea Level Pressure Trends: Model-Based Assessment of Detection, Attribution, and Consistency with CMIP5 Historical Simulations, J. Climate, 34, 327–346, https:// doi.org/10.1175/JCLI-D-19-0997.1, 2021.a Kristóf, E., Barcza, Z., Hollós, R., Bartholy, J., and Pongrácz, R.: Evaluation of Historical CMIP5 GCM Simulation Results Based on Detected Atmospheric Teleconnections, Atmosphere, 11, 723, https:// doi.org/10.3390/atmos11070723, 2020.a Laloyaux, P., Balmaseda, M., Bidlot, J.-R., Broennimann, S., Buizza, R., Boisseson, E., Dalhgren, P., Dee, D., Haimberger, L., Hersbach, H., Kosaka, Y., Martin, M., Poli, P., Rayner, N., Rustemeier, E., and Schepers, D.: CERA-20C: A coupled reanalysis of the twentieth century, J. Adv. Model. Earth Sy., 10, 1172–1195, https://doi.org/10.1029/2018MS001273, 2018.a, b Lancaster, H. O.: The Chi-squared Distribution, Wiley & Sons, Inc., New York, ISBN 9780471512301, 1969.a Lee, J., Sperber, K., Gleckler, P., Bonfils, C., and Taylor, K.: Quantifying the agreement between observed and simulated extratropical modes of interannual variability, Clim. Dynam., 52, 4057–4089, https://doi.org/10.1007/s00382-018-4355-4, 2019.a, b Lee, W.-L., Wang, Y.-C., Shiu, C.-J., Tsai, I., Tu, C.-Y., Lan, Y.-Y., Chen, J.-P., Pan, H.-L., and Hsu, H.-H.: Taiwan Earth System Model Version 1: description and evaluation of mean state, Geosci. Model Dev., 13, 3887–3904, https://doi.org/10.5194/gmd-13-3887-2020, 2020.a Li, G., Ren, B., Zheng, J., and Yang, C.: Trend Singular Value Decomposition Analysis and Its Application to the Global Ocean Surface Latent Heat Flux and SST Anomalies, J. Climate, 24, 2931–2948, https://doi.org/10.1175/2010JCLI3743.1, 2011.a Liu, Z. and Alexander, M.: Atmospheric bridge, oceanic tunnel, and global climatic teleconnections, Rev. 
Geophys., 45, RG2005, https://doi.org/10.1029/2005RG000172, 2007.a, b, c Meegan Kumar, D., Tierney, J. E., Bhattacharya, T., Zhu, J., McCarty, L., and Murray, J. W.: Climatic drivers of deglacial SST variability in the eastern Pacific, Paleoceanogr. Paleoclimatol., 36, e2021PA004264, https://doi.org/10.1029/2021PA004264, 2021.a Messié, M. and Chavez, F.: Global Modes of Sea Surface Temperature Variability in Relation to Regional Climate Indices, J. Climate, 24, 4314–4331, https://doi.org/10.1175/2011JCLI3941.1, 2011.a Mo, R., Ye, C., and Whitfield, P. H.: Application Potential of Four Nontraditional Similarity Metrics in Hydrometeorology, J. Hydrometeorol., 15, 1862–1880, https://doi.org/10.1175/JHM-D-13-0140.1, Monahan, A. H., Fyfe, J. C., Ambaum, M. H. P., B., S. D., and North, G. R.: Empirical Orthogonal Functions: The Medium is the Message, J. Climate, 22, 6501–6514, https://doi.org/10.1175/ 2009JCLI3062.1, 2009.a Müller, W. A., Jungclaus, J. H., Mauritsen, T., Baehr, J., Bittner, M., Budich, R., Esch, F. B. M., Ghosh, R., Haak, H., Ilyina, T., Kleine, T., Kornblueh, L., Li, H., Modali, K., Notz, D., Pohlmann, H., Roeckner, E., Stemmler, I., Tian, F., and Marotzke, J.: A Higher-resolution Version of the Max Planck Institute Earth System Model (MPI-ESM1.2-HR), J. Adv. Model. Earth Sy., 10, 1383–1413, https: //doi.org/10.1029/2017MS001217, 2018.a NOAA Physics Science Laboratory: 20CRv3, [data set], https://psl.noaa.gov/data/gridded/data.20thC_ReanV3.html, last access: 21 April 2021.a Novi, L., Bracco, A., and Falasca, F.: Uncovering marine connectivity through sea surface temperature, Sci. Rep., 11, 8839, https://doi.org/10.1038/s41598-021-87711-z, 2021.a Nowack, P., Runge, J., Eyring, V., and Haigh, J. D.: Causal networks for climate model evaluation and constrained projections, Nat. Commun., 11, 1415, https://doi.org/10.1038/s41467-020-15195-y, 2020.a, b Papalexiou, S. M., Rajulapati, C. R., Clark, M. 
P., and F., L.: Robustness of CMIP6 Historical Global Mean Temperature Simulations: Trends, Long-Term Persistence, Autocorrelation, and Distributional Shape, Earth's Future, 8, e2020EF001667, https://doi.org/10.1029/2020EF001667, 2020.a, b Pires, C. A. L. and Hannachi, A.: Independent subspace analysis of the sea surface temperature variability: Non-Gaussian sources and sensitivity to sampling and dimensionality, Complexity, 3076810, https://doi.org/10.1155/2017/3076810, 2017.a, b Pires, C. A. L. and Hannachi, A.: Bispectral analysis of nonlinear interaction, predictability and stochastic modelling with application to ENSO, Tellus A, 73, 1–30, https://doi.org/10.1080/ 16000870.2020.1866393, 2021.a Raible, C., Stocker, T., Yoshimori, M., Renold, M., Beyerle, U., Casty, C., and Luterbacher, J.: Northern Hemispheric trends of pressure indices and atmospheric circulation patterns in observations, reconstructions, and coupled GCM simulations, J. Climate, 18, 3968–3982, https://doi.org/10.1175/JCLI3511.1, 2005.a Roberts, M. J., Baker, A., Blockley, E. W., Calvert, D., Coward, A., Hewitt, H. T., Jackson, L. C., Kuhlbrodt, T., Mathiot, P., Roberts, C. D., Schiemann, R., Seddon, J., Vannière, B., and Vidale, P. L.: Description of the resolution hierarchy of the global coupled HadGEM3-GC3.1 model as used in CMIP6 HighResMIP experiments, Geosci. Model Dev., 12, 4999–5028, https://doi.org/10.5194/ gmd-12-4999-2019, 2019.a Rodríguez-Fonseca, B., Polo, I., García-Serrano, J., Losada, T., Mohino, E., Mechoso, C. R., and Kucharski, F.: Are Atlantic Ninños enhancing Pacific ENSO events in recent decades?, Geophys. Res. Lett., 36, L20705, https://doi.org/10.1029/2009GL040048, 2009.a Sanderson, B. M., Knutti, R., and Caldwell, P.: A Representative Democracy to Reduce Interdependency in a Multimodel Ensemble, J. 
Climate, 28, 5171–5194, https://doi.org/10.1175/JCLI-D-14-00362.1, Séférian, R., Nabat, P., Michou, M., Saint-Martin, D., Voldoire, A., Colin, J., Decharme, B., Delire, C., Berthet, S., Chevallier, M., Sénési, S., Franchisteguy, L., Vial, J., Mallet, M., Joetzjer, E., Geoffroy, O., Guérémy, J.-F., Moine, M.-P., Msadek, R., Ribes, A., Rocher, M., Roehrig, R., Salas-y Mélia, D., Sanchez, E., Terray, L., Valcke, S., Waldman, R., Aumont, O., Bopp, L., Deshayes, J., Éthé, C., and Madec, G.: Evaluation of CNRM Earth System Model, CNRM-ESM2-1: Role of Earth System Processes in Present-Day and Future Climate, J. Adv. Model. Earth Sy., 11, 4182–4227, https:// doi.org/10.1029/2019MS001791, 2019.a Seland, Ø., Bentsen, M., Olivié, D., Toniazzo, T., Gjermundsen, A., Graff, L. S., Debernard, J. B., Gupta, A. K., He, Y.-C., Kirkevåg, A., Schwinger, J., Tjiputra, J., Aas, K. S., Bethke, I., Fan, Y., Griesfeller, J., Grini, A., Guo, C., Ilicak, M., Karset, I. H. H., Landgren, O., Liakka, J., Moseid, K. O., Nummelin, A., Spensberger, C., Tang, H., Zhang, Z., Heinze, C., Iversen, T., and Schulz, M.: Overview of the Norwegian Earth System Model (NorESM2) and key climate response of CMIP6 DECK, historical, and scenario simulations, Geosci. Model Dev., 13, 6165–6200, https://doi.org/ 10.5194/gmd-13-6165-2020, 2020.a, b Sellar, A. A., Jones, C. G., Mulcahy, J. P., Tang, Y., Yool, A., Wiltshire, A., O'Connor, F. M., Stringer, M., Hill, R., Palmieri, J., Woodward, S., de Mora, L., Kuhlbrodt, T., Rumbold, S. T., Kelley, D. I., Ellis, R., Johnson, C. E., Walton, J., Abraham, N. L., Andrews, M. B., Andrews, T., Archibald, A. T., Berthou, S., Burke, E., Blockley, E., Carslaw, K., Dalvi, M., Edwards, J., Folberth, G. A., Gedney, N., Griffiths, P. T., Harper, A. B., Hendry, M. A., Hewitt, A. J., Johnson, B., Jones, A., Jones, C. D., Keeble, J., Liddicoat, S., Morgenstern, O., Parker, R. J., Predoi, V., Robertson, E., Siahaan, A., Smith, R. S., Swaminathan, R., Woodhouse, M. 
T., Zeng, G., and Zerroukat, M.: UKESM1: Description and Evaluation of the U.K. Earth System Model, J. Adv. Model. Earth Sy., 11, 4513–4558, https://doi.org/10.1029/2019MS001739, 2019.a Shen, C., Panda, S., and Vogelstein, J.: The Chi-Square Test of Distance Correlation, J. Comput. Graph. Stat., 31, 254–262, https://doi.org/10.1080/10618600.2021.1938585, 2022.a Simpson, I. R., Bacmeister, J., Neale, R. B., Hannay, C., Gettelman, A., Garcia, R. R., Lauritzen, P. H., Marsh, D. R., Mills, M. J., Medeiros, B., and Richter, J. B.: An Evaluation of the Large‐Scale Atmospheric Circulation and Its Variability in CESM2 and Other CMIP Models, J. Geophys. Res.-Atmos., 125, e2020JD032835, https://doi.org/10.1029/2020JD032835, 2020.a Slivinski, L. C., Compo, G. P., Whitaker, J. S., Sardeshmukh, P. D., Giese, B. S., McColl, C., Allan, R., Yin, X., Vose, R., Titchner, H., Kennedy, J., Spencer, L. J., Ashcroft, L., Brönnimann, S., Brunet, M., Camuffo, D., Cornes, R., Cram, T. A., Crouthamel, R., Domínguez-Castro, F., Freeman, J. E., Gergis, J., Hawkins, E., Jones, P. D., Jourdain, S., Kaplan, A., Kubota, H., Le Blancq, F., Lee, T.-C., Lorrey, A., Luterbacher, J., Maugeri, M., Mock, C. J., Moore, G. W. K., Przybylak, R., Pudmenzky, C., Reason, C. nd Slonosky, V. C., Smith, C. A., Tinz, B., Trewin, B., Valente, M. A., Wang, X. L., Wilkinson, C., Wood, K., and Wyszyński, P.: Towards a more reliable historical reanalysis: Improvements for version 3 of the Twentieth Century Reanalysis system, Q. J. Roy. Meteor. Soc., 145, 2876–2908, https://doi.org/10.1002/qj.3598, 2019.a, b Spensberger, C., Reeder, M. J., Spengler, T., and Patterson, M.: The Connection between the Southern Annular Mode and a Feature-Based Perspective on Southern Hemisphere Midlatitude Winter Variability, J. Climate, 33, 115–129, https://doi.org/10.1175/JCLI-D-19-0224.1, 2020.a Steinhäuser, K. and Tsonis, A.: A climate model intercomparison at the dynamics level, Clim. 
Dynam., 42, 1665–1670, https://doi.org/10.1007/s00382-013-1761-5, 2014.a, b Steinhäuser, K., Chawla, N. V., and Ganguly, A. R.: An Exploration of Climate Data Using Complex Networks, in: Proceedings of the Third International Workshop on Knowledge Discovery from Sensor Data, SensorKDD'09, 23–31, Association for Computing Machinery, Paris, https://doi.org/10.1145/1601966.1601973, 2009.a Steinhäuser, K., Ganguly, A. R., and Chawla, N. V.: Multivariate and multiscale dependence in the global climate system revealed through complex networks, Clim. Dynam., 39, 889–895, https://doi.org/ 10.1007/s00382-011-1135-9, 2012.a Streitberg, B.: Lancaster Interactions Revisited, Ann. Stat., 18, 1878–1885, https://doi.org/10.1214/aos/1176347885, 1990.a, b Swart, N. C., Cole, J. N. S., Kharin, V. V., Lazare, M., Scinocca, J. F., Gillett, N. P., Anstey, J., Arora, V., Christian, J. R., Hanna, S., Jiao, Y., Lee, W. G., Majaess, F., Saenko, O. A., Seiler, C., Seinen, C., Shao, A., Sigmond, M., Solheim, L., von Salzen, K., Yang, D., and Winter, B.: The Canadian Earth System Model version 5 (CanESM5.0.3), Geosci. Model Dev., 12, 4823–4873, https:// doi.org/10.5194/gmd-12-4823-2019, 2019.a Székely, G. J. and Rizzo, M. L.: Partial distance correlation with methods for dissimilarities, Ann. Stat., 42, 2382–2412, https://doi.org/10.1214/14-AOS1255, 2014.a Székely, G. J., Rizzo, M. L., and Bakirov, N. K.: Measuring and Testing Dependence by Correlation of Distances, Ann. Stat., 35, 2769–2794, https://doi.org/10.1214/009053607000000505, 2007.a Tantet, A. and Dijkstra, H. A.: An interaction network perspective on the relation between patterns of sea surface temperature variability and global mean surface temperature, Earth Syst. 
Dynam., 5, 1–14, https://doi.org/10.5194/esd-5-1-2014, 2014.a Tatebe, H., Ogura, T., Nitta, T., Komuro, Y., Ogochi, K., Takemura, T., Sudo, K., Sekiguchi, M., Abe, M., Saito, F., Chikira, M., Watanabe, S., Mori, M., Hirota, N., Kawatani, Y., Mochizuki, T., Yoshimura, K., Takata, K., O'ishi, R., Yamazaki, D., Suzuki, T., Kurogi, M., Kataoka, T., Watanabe, M., and Kimoto, M.: Description and basic evaluation of simulated mean state, internal variability, and climate sensitivity in MIROC6, Geosci. Model Dev., 12, 2727–2765, https://doi.org/10.5194/gmd-12-2727-2019, 2019.a Tsonis, A. A., Swanson, K. L., and Wang, G.: On the Role of Atmospheric Teleconnections in Climate, J. Climate, 21, 2990–3001, https://doi.org/10.1175/2007JCLI1907.1, 2008.a Tsonis, A. A., Wang, G., Swanson, K. L., Rodrigues, F. A., and da Fontura Costa, L.: Community structure and dynamics in climate networks, Clim. Dynam., 37, 933–940, https://doi.org/10.1007/ s00382-010-0874-3, 2011.a Vázquez-Patiño, A., Campozano, L., Mendoza, D., and Samaniego, E.: A causal flow approach for the evaluation of global climate models, Int. J. Climatol., 40, 4497–4517, https://doi.org/10.1002/ joc.6470, 2019.a Voldoire, A., Saint‐Martin, D., Sénési, S., Decharme, B., Alias, A., Chevallier, M., Colin, J., Guérémy, J., Michou, M., Moine, M., Nabat, P., Roehrig, R., Salas y Mélia, D., Séférian, R., Valcke, S., Beau, I., Belamari, S., Berthet, S., Cassou, C., Cattiaux, J., Deshayes, J., Douville, H., Ethé, C., Franchistéguy, L., Geoffroy, O., Lévy, C., Madec, G., Meurdesoif, Y., Msadek, R., Ribes, A., Sanchez‐Gomez, E., Terray, L., and Waldman, R.: Evaluation of CMIP6 DECK Experiments With CNRM‐CM6-1, J. Adv. Model. Earth Sy., 11, 2177–2213, https://doi.org/10.1029/2019MS001683, 2019.a Wang, B. and An, S.-I.: A method for detecting season-dependent modes of climate variability: S-EOF analysis, Geophys. Res. Lett., 32, L15710, https://doi.org/10.1029/2005GL022709, 2005.a Wang, Z., Bovik, A. C., Sheikh, H. 
R., and Simoncelli, E. P.: Image quality assessment: From error visibility to structural similarity, IEEE T. Image Process., 13, 600–612, https://doi.org/10.1109/ TIP.2003.819861, 2004.a Wiedermann, M., Donges, J. F., Handorf, D., Kurths, J., and Donner, R. V.: Hierarchical structures in Northern Hemispheric extratropical winter ocean–atmosphere interactions, Int. J. Climatol., 37, 3821–3836, https://doi.org/10.1002/joc.4956, 2017.a Wu, T., Lu, Y., Fang, Y., Xin, X., Li, L., Li, W., Jie, W., Zhang, J., Liu, Y., Zhang, L., Zhang, F., Zhang, Y., Wu, F., Li, J., Chu, M., Wang, Z., Shi, X., Liu, X., Wei, M., Huang, A., Zhang, Y., and Liu, X.: The Beijing Climate Center Climate System Model (BCC-CSM): the main progress from CMIP5 to CMIP6, Geosci. Model Dev., 12, 1573–1600, https://doi.org/10.5194/gmd-12-1573-2019, 2019. a Yang, P., Wang, G., Xiao, Z., Tsonis, A. A., Feng, G., Liu, S., and Zhou, X.: Climate: a dynamical system with mismatched space and time domains, Clim. Dynam., 56, 3305–3311, https://doi.org/10.1007/ s00382-021-05646-7, 2021.a Yeo, S.-R., Yeh, S.-W., Kim, K.-Y., and Kim, W.-M.: The role of low-frequency variation in the manifestation of warming trend and ENSO amplitude, Clim. Dynam., 49, 1197–1213, https://doi.org/10.1007/ s00382-016-3376-0, 2017.a Yukimoto, S., Kawai, H., Koshiro, T., Oshima, N., Yoshida, K., Urakawa, S., Tsujino, H., Deushi, M., Tanaka, T., Hosaka, M., Yabu, S., Yoshimura, H., Shindo, E., Mizuta, R., Obata, A., Adachi, Y., and Ishii, M.: The Meteorological Research Institute Earth System Model Version 2.0, MRI-ESM2.0: Description and Basic Evaluation of the Physical Component, J. Meteorol. Soc. Jpn., 97, 931–965, https://doi.org/10.2151/jmsj.2019-051, 2019.a Zhang, M.-Z., Xu, Z., Han, Y., and Guo, W.: An improved multivariable integrated evaluation method and tool (MVIETool) v1.0 for multimodel intercomparison, Geosci. 
Model Dev., 14, 3079–3094, https:// doi.org/10.5194/gmd-14-3079-2021, 2021.a Zhu, X., Dong, W., Wei, Z., Guo, Y., Gao, X., Wen, X., Yang, S., Zheng, Z., Yan, D., Zhu, Y., and Chen, J.: Multi-decadal evolution characteristics of global surface temperature anomaly data shown by observation and CMIP5 models, Int. J. Climatol., 38, 1533–1542, https://doi.org/10.1002/joc.5264, 2018.a Ziehn, T., Chamberlain, M. A., Law, R. M., Lenton, A., Bodman, R. W., Dix, M., Stevens, L., Wang, Y.-P., and Srbinovsky, J.: The Australian Earth System Model: ACCESS-ESM1.5, J. South. Hemisphere Earth Syst. Sci., 70, 193–214, https://doi.org/10.1071/ES19035, 2020.a
ISEE Middle Level Quantitative Help Students in need of ISEE Middle Level Quantitative help will benefit greatly from our interactive syllabus. We break down all of the key elements so you can get adequate ISEE Middle Level Quantitative help. With the imperative study concepts and relevant practice questions right at your fingertips, you'll have plenty of ISEE Middle Level Quantitative help in no time. Get help today with our extensive collection of essential ISEE Middle Level Quantitative information. If your child is a fourth- or fifth-grader preparing to attend an independent or a magnet school next year, you probably already know that they will be required to take the Middle Level Independent School Entrance Examination. This test is typically used to determine admissions, scholarship qualifications, and proper course placement for your child. The Middle Level ISEE includes content regarding verbal reasoning, quantitative reasoning, reading comprehension, and proficiency in mathematics. The quantitative reasoning portion of the ISEE contains 37 questions to be completed in only 35 minutes, so adequate preparation is the key to your child's success. Learn by Concept provides a free interactive online Middle Level ISEE Quantitative study guide to help with test preparation. It's set up like an interactive syllabus, enabling your child to quickly assess their strengths and weaknesses. This helps your child to focus on the areas in which they need improvement. When using Learn by Concept to review for the Middle Level ISEE Quantitative section, your child can choose to practice algebraic concepts, geometry, numbers and operations, square roots, and more. Each of these subjects contains two to three more layers of subtopics from which to choose.
For instance, within the numbers and operations topic, your child could choose the subtopic regarding fractions, and within that subtopic, your child could choose to review how to add, subtract, multiply, or divide fractions, or how to find the decimal equivalent of a fraction. Learn by Concept's Middle Level ISEE Quantitative practice questions are presented in a multiple-choice format that simulates the actual exam. Below the possible solutions, your child will find the correct answer, accompanied by a detailed explanation. Along with the explanation, your child will find links to relevant formulas and concepts. The multiple-choice answers are randomly shuffled each time your child refreshes the page. This shuffling strategy reinforces your child's review of the content by preventing rote memorization of the correct answer based on the order in which the choices are presented. The variety of additional Learning Tools available to help your child in their Middle Level ISEE Quantitative review includes online Flashcards, a Question of the Day, and Practice Tests. Online Flashcards assist in studying for the quantitative reasoning portion of the Middle Level ISEE by strengthening memorization skills and allowing your student to use their study time wisely. The Question of the Day presents your child with a randomly selected question each day to keep studying for the ISEE at the top of their daily priorities. The Practice Tests are arranged in a similar way to Learn by Concept, with concept-specific subtopics. However, the Practice Tests provide a timed and scored testing experience, and the full-length Middle Level ISEE Quantitative Practice Tests offer a more comprehensive one. All of Varsity Tutors' Learning Tools are meant to be used together to give your child the opportunity to enhance their review with well-rounded study materials.
Percentage Change Calculator The percentage change calculator determines the percentage change between two values. It is particularly useful in many aspects of finance, chemistry, and exponential growth and decay, as well as in other areas of mathematics. First, we need to know how to calculate percent change and to understand and use the percent change formula. To do this, we will provide you with many examples, each with an in-depth analysis of various mathematical challenges and traps waiting for beginners. Furthermore, we will teach you how to calculate percentage change when finding the population growth rate, a fundamental statistical parameter describing processes happening in a particular population. We are sure that after reading the whole text, the percentage change formula will stay in your head for a long time, and you will be able to find the percent change in any situation. 🔎 If you want to compute the percentage change between percentage points, check our percentage point calculator. What is percentage change? Percent change differs from percent increase and percent decrease in the sense that we can see both directions of the change. For example, the percent increase calculator calculates the amount of increase, in which we would say, "x percent increase". The percent decrease calculator calculates the amount of decline, in which we would say, "x percent decrease". The percent change calculator would yield a result in which we would say, "x percent increase or decrease". 🙋 We can also use percent change to express the relative error between the observed and true values in any measurement. To learn how to do that, check our percent error calculator. Let's explain in more detail how to calculate percent change. How do I calculate the percent change? To calculate percent change, we need to: 1. Take the difference between the starting value and the final value. 2. 
Divide by the absolute value of the starting value. 3. Multiply the result by 100. 4. Or use Omni's percent change calculator! 🙂 As you can see, it's not hard to calculate percent change. If you're interested in the mathematical formula for percentage change, we invite you to read the next section. Percent change formula The percent change formula is as follows: $\footnotesize \rm \%\ change = 100 \times \frac{(final - initial)}{|initial|}$ The two straight lines surrounding a number or expression (in this case, $\rm initial$) indicate the absolute value, or modulus. It means that if the value inside the straight lines is negative, we have to turn it into a positive one. The easiest way to do this is by erasing the minus before it. If the value inside the straight lines is positive, we don't need to do anything; it stays positive. After the absolute value is found, we can erase the straight lines or turn them into a bracket, as they may serve this function as well. If you are asking how to calculate the percent difference, you should check the difference percent calculator. But if you are only looking for the difference between the initial and final values, this percent change calculator will help you. The general percentage formula for one quantity in terms of another is obtained by multiplying the ratio of the two quantities by 100. The percentage change calculator is not only useful in a classroom setting but also in everyday applications. The amount of sales tax on an item represents a percent change, as does the tip added to the bill at a restaurant. Calculating the percentage change may come in handy when negotiating a new salary or assessing whether your child's height has increased appropriately. As you can see, knowing how to calculate percent change by hand using the percent change formula may be useful in the real world. Examples of calculating percentage change Let's do a few examples together to get a good grasp on how to find a percent change. 
In the first case, let's suppose that you have a change in value from 60 to 72, and you want to know the percent change. 1. Firstly, you need to input 60 as the original value and 72 as the new value into the formula. 2. Secondly, you have to subtract 60 from 72. As a result, you get 12. 3. Next, you should get the absolute value of 60. As 60 is a positive number, you don't need to do anything. You can erase the straight lines surrounding 60. 4. Now, you can divide 12 by 60. After this division, you get 0.2. 5. The last thing to do is to multiply the 0.2 by 100. As a result, you get 20%. The whole calculations look like this: [(72 – 60) / |60|] × 100 = (12 / |60|) × 100 = (12 / 60) × 100 = 0.2 × 100 = 20% 6. You can check your result using the percentage change calculator. Is everything alright? In the second example, let's deal with a slightly different example and calculate the percent change in value from 50 to -22. 1. Set 50 as the original value and -22 as the new value. 2. Then, you need to perform a subtraction. The difference between -22 and 50 is -72. Remember always to subtract the original value from the new value! 3. Next, you are obliged to get the absolute of 50. As the original value in this example is also a positive number, then you can just erase the straight lines. 4. It is time to perform the division. -72 divided by the 50 equals -1.44. 5. Finally, you have to multiply the result by 100. Let's see. -1.44 times 100 is -144%. The whole process should look like this: [(-22 – 50) / |50|] × 100 = (-72 / |50|) × 100 = (-72 / 50) × 100 = -1.44 × 100 = -144% 6. Remember that you can always check the result with the percent change calculator. As you may have already observed, the final result will be negative when the new value is smaller than the original one. Thus, you need to put a minus before it. On the other hand, if the new value is bigger than the original value, the result will be positive. 
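The steps above translate directly into a short function. This is a minimal sketch; the function name is ours for illustration, not part of the calculator:

```python
def percent_change(initial, final):
    """Percent change from `initial` to `final`:
    100 * (final - initial) / |initial|."""
    if initial == 0:
        # Percent change from a starting value of 0 is undefined
        raise ValueError("percent change from an initial value of 0 is undefined")
    return 100.0 * (final - initial) / abs(initial)
```

For the two worked examples, `percent_change(60, 72)` returns `20.0` and `percent_change(50, -22)` returns `-144.0`.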
You can use this to predict the outcome and check your answer. How to find the percentage change between negative numbers? Let's calculate together the percent change from -10 to -25: 1. Subtract the original value from the new one. -25 reduced by -10 is -15. 2. Compute the absolute value of the original value. As -10 is negative, you have to erase the minus before it, thus creating a positive value of 10. 3. Now, let's divide -15 by 10 that you got from the last step. -15 divided by 10 is -1.5. 4. You can finish your calculation by multiplying -1.5 by 100. The final outcome is -150%. The full equation should look like this: [(-25 – (-10)) / |-10|] × 100 = (-15 / |-10|) × 100 = (-15 / 10) × 100 = -1.5 × 100 = -150% 5. As always, we encourage you to check this result with Omni's percentage change calculator. If you had used a negative instead of a positive for the absolute value in this example, then -15 would have been divided by -10, giving you 1.5 as a result. It is a positive number, and your final answer would have been 150%. Your error would have been the difference between -1.5 and 1.5. This difference equals 3, so our calculation would have ended with 300% of an error (3 × 100% = 300%)! This is why you have to be careful when solving mathematical problems. A small mistake in one place may result in an enormous error in another. We have a task for you! Calculate, using the methods we have described previously, what is the percentage change between -20 and -30. Concentrate and watch out for mathematical traps that are waiting for you. But don't fear. By this point, you should know everything that is required to do it correctly. Remember to check your result using the percent change calculator. Population growth rate formula Population growth is the increase in the number of individuals in a certain population. It can be a population of people but also cows, foxes, or even flies. Members of any species can create a population. 
The population may be limited to a particular territory or country or expand to the whole world. You may count the number of dogs in your neighborhood, thus determining the population of dogs in the area surrounding your home. If you count their number after one year and compare it with the previous one, you will obtain their population growth. We can calculate it using this formula: current population – previous population = population growth When the population growth is higher than zero, the population is increasing, and the number of individuals is getting bigger each year. However, when the population growth is negative (with a value below zero), the population becomes smaller. A population growth of 0 means that the population size is not changing at all. Let us see how to find the rate of change or the growth rate of the population. Just divide the population growth by the number of individuals in the previous population and multiply by 100 to get the population growth rate. It is a measure of population growth compared to the number of individuals forming the population in the previous period. Mathematically, it looks like this: (population growth / previous population) × 100 = population growth rate Combined, we can write the whole formula as: ((current population – previous population) / previous population) × 100 = population growth rate Notice that although it looks very similar to the formula for percentage change, you don’t need to get the absolute value of the previous population. It is because the population can never drop below zero nor have a negative value. Population growth and population growth rate can, however, be negative, representing the decreasing number of individuals. What is the difference between population growth and the population growth rate? Both of these parameters are ways of illustrating the change in the size of the population. 
Population growth is more direct and precise, as it shows us the exact difference between population size in two periods. However, the population growth rate also has its advantages. It emphasizes the dynamics of the process. It tells us how big the change is compared to the previous state of the population. A population growth of 20 may seem small, but if the original population was 10, then it means that the population size has tripled. The population growth rate shows it to us. In this case, its value would be 200%. How to compute population growth rate? Let's go together through an example to see how to find the population growth rate. In 1990 in the United States, there were 253,339,000 citizens. In 2010 it reached 310,384,000 people. 1. Let's calculate the population growth. You have to subtract the number of US citizens in 1990 from the number of citizens in 2010: 310,384,000 - 253,339,000 = 57,045,000 2. Now, you can calculate the population growth rate. To do that, you need to divide the population growth by the number of citizens in the earlier period (in this case, it's 1990): 57,045,000 / 253,339,000 = 0.225 3. The last thing to do is multiply the acquired value by 100 to get the percent: 0.225 × 100% = 22.5% 4. After these calculations, you can say that the US population increased by 22.5% between the years 1990 and 2010. Congratulations! You don't have to perform all the calculations by hand. Keep in mind that our percentage change calculator is always waiting for you at Omni Calculator! There is yet another situation in which you may want to use the percentage change calculator. If you have some spare money that you want to invest, you will have to choose between many investment offers. By comparing the percent changes of different investment options, you will see which is the optimal one. Is percentage difference equal to percentage change? No, percentage difference and percentage change are two different notions. 
In percentage change, the point of reference is one of the numbers in question, while in percentage difference, we take the average of these two numbers as the point of reference. Moreover, percentage change can be positive or negative, while the percentage difference is always positive (it has no direction). What is the percentage change from 5 to 20? 20 is a 300% increase of 5. Indeed, we have (20 - 5) / 5 = 3 and 3 × 100% = 300%, as claimed. What is the percentage change from 20 to 10? 10 is a 50% decrease of 20. Indeed, we have (10 - 20) / 20 = -0.5 and -0.5 × 100% = -50%, which corresponds to a 50% decrease. What is the percentage change from 2 to 3? 3 is a 50% increase of 2. Indeed, we have (3 - 2) / 2 = 0.5 and 0.5 × 100% = 50%, as we've claimed. What is the percentage change from 5 to 4? 4 is a 20% decrease of 5. Indeed, we have (4 - 5) / 5 = -0.2 and -0.2 × 100% = -20%, which corresponds to a 20% decrease, as claimed.
Constraints on a general 3-generation neutrino mass matrix from neutrino data: Application to the MSSM with R-parity violation We consider a general symmetric (3×3) mass matrix for three generations of neutrinos. Imposing the constraints, from the atmospheric neutrino and solar neutrino anomalies as well as from the CHOOZ experiment, on the mass squared differences and on the mixing angles, we identify the ranges of allowed inputs for the 6 matrix elements. We apply our results to Majorana left-handed neutrino masses generated at tree level and through fermion–sfermion loop diagrams in the MSSM with R-parity violation. The present experimental results on neutrinos from laboratories, cosmology and astrophysics are implemented to either put bounds on trilinear (λ_{ijk}, λ′_{ijk}) and bilinear (μ_{e,μ,τ}) R-parity-violating couplings or constrain combinations of products of these couplings.
Lesson Plan: KS3 Maths – Teach Angles with Clocks | Maths and Science | Teach Secondary Students often approach the topic of angles by memorising lots of vocabulary (acute, obtuse, right angle, etc.) and ‘facts’ about angles around a point, on a straight line, inside and outside a triangle and other polygons and between a transversal and parallel lines. Then they have to choose the appropriate rule to get the right answer to each calculation. In this lesson, a clock face is used to stimulate some calculations involving angles. First, KS3 students are invited to estimate the size of the angle and then to calculate it exactly. Initially, some of the calculations are quite easy, while others are more complicated as the lesson progresses. Provided that students know that there are 360° in a whole turn, they should be able to work out everything else for themselves. Calculating angles often involves following rules based on ‘angle facts’ that are poorly understood. In this lesson, students are encouraged to use their intuition to estimate and then calculate the sizes of angles on a clock face. If you have an analogue clock on the wall in the classroom, you might want to remove it before this lesson – or, alternatively, make use of it to support students’ visualisation. Q. I want you to imagine an ordinary clock with an hour hand and a minute hand. Can you tell me a time when the angle between the hands is 90°? Students will probably say 9:00 or 3:00, which are both correct, but they might also say other times which are only approximately correct, such as 9:30 or 6:15. One way to deal with this is to divide the board into two sections (without labelling them) and write exact answers in one section and approximate answers in the other. Keep going until you have some times in each section. Q. Why do you think I have put these times here and these times here? Can you think of any more times that would go with these ones? Or with these ones? 
If students don’t see the difference, ask someone to come to the board and draw exactly where the hands are at one of the times, such as 9:30, which is in the approximate section of the board. The student will probably draw the hour hand pointing exactly at the 9 and the minute hand at the 6, and you can ask other students whether they agree. Q. Is that exactly right? If students still don’t see the point, then they may need to examine a real clock. They will realise that the hour hand doesn’t jump suddenly from 9 to 10 but moves smoothly, so that at 9:30 it will be half way between 9 and 10. Q. Can you estimate the size of the angle? Students might be able to work it out exactly, but even if not they should be able to see that it is a bit more than 90°. Q. Which of the times are exactly 90° and which are a bit more? Are any less than 90°? Students might be able to visualise the pictures in their heads or they may need to do some sketches to work out which are which. Q. Can you make up some more times that are just a bit more than 90° and some that are just a bit less? Times such as 10:05, 11:10, 12:15, 1:20 and 2:25 are all a bit less than 90°, whereas times such as 4:05, 5:10, 6:15, 7:20 and 8:25 are all a bit more than 90°. Q. Can you calculate the angle between the hands at 9:30 exactly? You could clarify that by “the angle” you mean the smaller angle, not the reflex angle around the back of the two hands, but students will probably assume this. Students could work on this in pairs. Q. What answer did you get and how did you work it out? Encourage students to explain their different methods. They may realise that because there are 12 numbers around the clock face the angle between each adjacent pair will be 360/12 = 30°. So at 9:30 we have 3½ of these, which is 3½ × 30 = 105°. Alternatively, they might say that the angle is 90° plus half of 30°. Q. I want you to work out the angles for some of the other different times on the board and for other times where the hands are close to 90°. 
You could specify how many different times you want students to calculate the angles for. Students who are very confident with this could try difficult times, like 1:25 (the answer is 107.5°). Students who find it difficult to begin could start with times that are on the hour, such as 10:00, or times that are on the half hour, such as 3:30. Remind the students that they need to be able to explain how they calculated their answers and convince everyone that their answers are correct. It is useful to encourage them to estimate the sizes of their answers before they start and to make sure that their final answers are sensible. Watch out for students assuming that the hour hand is always pointing directly at the number, or assuming that if it is in between then it is always half way between. It is also useful when checking answers to realise that the angle for a time that is any whole number of minutes will be a multiple of ½°, because 360/(12 × 60) = 0.5°, so there should not be any answers like 135.3777°. (A spreadsheet for calculating the angles is given in the “Additional resources” section.) Can you find a time where the hands lie in a straight line (i.e., make 180° with each other)? The only easy exact answer is 6:00. However, since the hands turn smoothly it will happen another 11 times over the next 12 hours. Since the intervals will be equally spaced, the hands will lie in a straight line every 1 1/11 hours; i.e., at about 7:05:27, 8:10:54, 9:16:22, etc. With similar reasoning students can tackle questions like: • At what times will the hands point in exactly the same direction? • At what times will the hands make exactly a right angle? 
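These by-hand calculations can be checked with a short function — an illustrative sketch for the teacher, not part of the lesson materials. It uses the facts established above: adjacent clock numbers are 30° apart, and the hour hand moves 360/(12 × 60) = 0.5° per minute.

```python
def clock_angle(hour, minute):
    """Smaller angle, in degrees, between the hands at hour:minute.

    The minute hand moves 360/60 = 6 degrees per minute; the hour hand
    moves 30 degrees per hour plus 0.5 degrees per minute.
    """
    minute_angle = 6.0 * minute
    hour_angle = 30.0 * (hour % 12) + 0.5 * minute
    diff = abs(hour_angle - minute_angle) % 360
    return min(diff, 360 - diff)  # take the non-reflex angle
```

`clock_angle(9, 30)` returns `105.0` and `clock_angle(1, 25)` returns `107.5`, matching the worked answers above.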
Confident students could look for times where the hands make a trickier angle, such as 100°, or pose additional questions relating to the second hand (which, of course, is actually the ‘third’ hand). You could conclude the lesson with a plenary in which the students talk about the times that they have chosen and the angles that they have worked out. Sometimes two groups will have worked on the same time, and they may have used different methods, which could be interesting to discuss. Q. Can someone tell us a time that they tried that was quite easy to calculate? What answer did you get? Does that seem about the right size to everyone else? Can you show us how you worked it out? Q. Can someone tell us a time that they did that was a bit harder than that one? How did you work yours out? Q. Did anyone try an even more difficult time than that one? How did you do yours? Q. What ways of working these out did you find useful? Did you find any quick ways or shortcuts? Did you notice anything interesting? The times listed earlier have the following angles: 10:05, 11:10, 12:15, 1:20 and 2:25 give 87.5°, 85°, 82.5°, 80° and 77.5°, while 4:05, 5:10, 6:15, 7:20 and 8:25 give 92.5°, 95°, 97.5°, 100° and 102.5°. Students could think about patterns they notice, such as why in the first list the angles go down in 2.5°s, whereas in the second list they go up in 2.5°s. In the first list, every time we move on 65 minutes the minute hand will be 30° further round but the hour hand will move 30° plus 1/12 of 30°, which is 2.5°, so this is how much the total angle will decrease by each time. Colin Foster is an Assistant Professor in mathematics education in the School of Education at the University of Nottingham. He has written many books and articles for mathematics teachers.
Chapter 1: Sampling and Data Notes • Statistics: the science of planning studies and experiments, obtaining data, and then organizing, summarizing, presenting, analyzing, interpreting, and drawing conclusions based on the data. • Data: collections of observations. • Descriptive Statistics: organizing and summarizing data; by graphing and by numerical values (such as an average). • Inferential Statistics: uses methods that take a result from a sample, extend it to the population, and measure the reliability of the result. • Probability: the chance of an event occurring. • Population: the complete collection of all individuals to be studied. • Sample: a subcollection of members selected from a population. • Sampling: selecting a portion (or subset) of the larger population and studying that portion (the sample) to gain information about the population. Data are the result of sampling from a population. • Parameter: a numerical measurement describing some characteristic of a population. • Statistic: a numerical measurement describing some characteristic of a sample. • Representative sample: the idea that the sample must contain the characteristics of the population. One of the main concerns in the field of statistics is how accurately a statistic estimates a parameter. • Variable: a characteristic or measurement that can be determined for each member of a population. • Mean: or “average.” • Proportion: part out of the whole/total. • Quantitative (or numerical) data: data that consists of numbers representing counts or measurements. • Qualitative (or Categorical) data: data that consists of names or labels that are not numbers representing counts or measurements. • Discrete data: quantitative data which results when the number of possible values is either a finite number or a countable number. 
• Continuous data: quantitative data which results when there are infinitely many possible values corresponding to some continuous scale that covers a range of values without gaps, interruptions, or jumps. • Pie Chart: categories of data are represented by wedges in a circle and are proportional in size to the percentage of individuals in each category. • Bar Graph: the length of the bar for each category is proportional to the number or percent of individuals in each category. • Pareto chart: consists of bars that are sorted into order by category size (largest to smallest). • Simple random sample: A sample of n subjects selected in such a way that every possible sample of the same size n has the same chance of being chosen. • Systematic sample: A sample in which the researcher selects some starting point and then selects every kth element in the population. • Stratified sample: A sample in which the researcher subdivides the population into at least two different subgroups (or strata), and then draws a sample from each subgroup. • Cluster sample: A sample in which the researcher first divides the population into sections (or clusters), and then randomly selects all members from some of those clusters. • Convenience sample: A sample in which the researcher simply uses results that are very easy to get. This is not a valid sampling method and will likely result in biased data. • Bias: if the results of the sample are not representative of the population. • Problems with samples: A sample must be representative of the population. A sample that is not representative of the population is biased. • Self-selected samples: Responses only by people who choose to respond, such as call-in surveys, are often unreliable. • Sample size issues: Samples that are too small may be unreliable. Larger samples are better, if possible. 
In some situations, having small samples is unavoidable, and they can still be used to draw conclusions. • Undue influence: collecting data or asking questions in a way that influences the response. • Non-response or refusal of participation: The collected responses may no longer be representative of the population. Often, people with strong positive or negative opinions may answer surveys, which can affect the results. • Causality: A relationship between two variables does not mean that one causes the other to occur. They may be related (correlated) because of their relationship through a different variable. • Misleading use of data: improperly displayed graphs, incomplete data, or lack of context. • Confounding: When the effects of multiple factors on a response cannot be separated. • Explanatory variable: The variable whose effect you want to study; the independent variable. • Response variable: the variable that you suspect is affected by the other variable; the dependent variable. • Experimental Unit: a single object or individual to be measured • Placebo: A treatment that cannot influence the response variable • Double-blinded experiment: one in which both the subjects and the researchers involved with the subjects are blinded. • Nonsampling Error: an issue that affects the reliability of sampling data other than natural variation • Institutional Review Board: a committee tasked with oversight of research programs that involve human subjects
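Two of the sampling schemes defined above can be sketched in a few lines. This is an illustrative sketch: the population of 100 labelled subjects, the sample size and the interval k are invented for the example.

```python
import random

random.seed(1)  # for a reproducible illustration
population = list(range(1, 101))  # 100 hypothetical subjects labelled 1..100

# Simple random sample: every possible sample of size n = 10 is equally likely
srs = random.sample(population, k=10)

# Systematic sample: pick a random starting point, then take every k-th element
k = 10
start = random.randrange(k)
systematic = population[start::k]
```

A convenience sample, by contrast, would simply take whichever subjects are easiest to reach, which is why it tends to produce biased data.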
Assessing rainfall radar errors with an inverse stochastic modelling framework Articles | Volume 28, issue 20 © Author(s) 2024. This work is distributed under the Creative Commons Attribution 4.0 License. Weather radar is a crucial tool for rainfall observation and forecasting, providing high-resolution estimates in both space and time. Despite this, radar rainfall estimates are subject to many error sources – including attenuation, ground clutter, beam blockage and drop-size distribution – with the true rainfall field unknown. A flexible stochastic model for simulating errors relating to the radar rainfall estimation process is implemented, inverting standard weather radar processing methods and imposing path-integrated attenuation effects, a stochastic drop-size-distribution field, and sampling and random errors. This can provide realistic weather radar images, of which we know the true rainfall field and the corrected “best-guess” rainfall field which would be obtained if they were observed in a real-world case. The structure of these errors is then investigated, with a focus on the frequency and behaviour of “rainfall shadows”. Half of the simulated weather radar images have at least 3% of their significant rainfall rates shadowed, and 25% have at least 45 km² containing rainfall shadows, resulting in underestimation of the potential impacts of flooding. A model framework for investigating the behaviour of errors relating to the radar rainfall estimation process is demonstrated, with the flexible and efficient tool performing well in generating realistic weather radar images visually for a large range of event types. 
Received: 05 Jan 2024 – Discussion started: 30 Jan 2024 – Revised: 26 Jun 2024 – Accepted: 29 Aug 2024 – Published: 23 Oct 2024 Precipitation is challenging to measure accurately, due to its intermittent nature, spatio-temporal variability and sensitivity to environmental conditions (Savina et al., 2012). For urban hydrology, weather radar plays an increasingly important role in quantitative precipitation estimation, due to the high spatio-temporal resolution of the information needed (Thorndahl et al., 2017). The small sizes of urban catchments and the intended hydrological applications – particularly for real time or near real time – require information about precipitation fields at small temporal and spatial scales, from 1–10min and 1–5km, respectively (Berne et al., 2004; Ochoa-Rodriguez et al., 2015; De Vos et al., 2017; Thorndahl et al., 2017; Shehu and Haberlandt, 2021). Despite the suitability of weather radar for obtaining high-resolution rainfall estimates, there are many sources of error in the estimation process, with different sources of uncertainty reviewed in numerous studies (Michelson et al., 2005; Meischner, 2005; Villarini and Krajewski, 2010; Ośróka et al., 2014; Ciach and Gebremichael, 2020). Errors include radar calibration and stability problems, contamination by clutter and anomalous propagation, occultation, a beam-broadening effect with non-uniform beam filling, attenuation and assumptions made about the drop-size distribution (DSD) ( Marshall and Palmer, 1948; Harrison et al., 2000). Some error sources can be corrected for, such as bias and systematic errors, ground clutter (Gabella and Notarpietro, 2002; Ventura and Tabary, 2013 ; Li et al., 2013) and attenuation (Nicol and Austin, 2003; Krämer, 2008; Jacobi and Heistermann, 2016), resulting in significantly improved reliability. 
Correction procedures are often limited due to the cumulative nature of errors from a superposition of different sources, with complex approaches showing only modest improvements to estimates. Information on the rainfall field is lost, irretrievable, and we do not even know how often this happens. There is therefore an ongoing need to account for errors in the radar rainfall estimation process (Villarini and Krajewski, 2010; Seo et al., 2018), and uncertainties should be acknowledged and modelled (Ciach et al., 2007; Gires et al., 2012; Villarini et al., 2014; Rico-Ramirez et al., 2015). The poor quantification of uncertainties was highlighted as a fundamental issue in AghaKouchak et al. (2010a) and expanded in AghaKouchak et al. (2010b). An error model described in Hasan et al. (2014) found that uncertainties were easily identifiable for unbiased reflectivity–rainfall (Z–R) relationships, incorporating radar reflectivity uncertainties in Hasan et al. (2016). Variograms were used to represent radar rainfall uncertainties (Cecinati et al., 2017), eliminating the need for a covariance matrix for faster and more flexible calculation of the spatial correlation of errors. Uijlenhoet and Berne (2008) created a stochastic model of range profiles for the DSD, using a Monte Carlo framework (Berne and Uijlenhoet, 2006) to estimate uncertainties using two attenuation correction schemes. Yan et al. (2021) imposed random and non-linear radar errors on simulated rainfall fields, with Z–R relationship errors appearing to have little influence overall. Error quantification is challenging, and errors propagate into future estimates for any model which requires rainfall as an input. The fundamental limitation in radar correction is that the “true” rainfall field is not available for comparisons. In this study, the aim is to work backwards to obtain an estimate of the uncertainty in the radar rainfall estimation process. 
Using a new model for simulating realistic space–time rainfall event fields with high resolution (matching that of a UK standard C-band weather radar) (Green et al., 2024), we extract a clustered parameterization based on radar rainfall events from the High Moorsley weather radar operated by the UK Met Office. These simulation outputs are treated as the true rainfall field. Errors relating to each step of the radar rainfall estimation process are then imposed on the simulated rainfall field to obtain an ensemble of spatio-temporal error fields for each event in a stochastic manner, forming a superposition of different error sources. This is done by inverting standard radar processing methods, allowing identification of the frequency of occurrence and the extent of the loss of important information. In this study, the data and study area are first discussed in Sect. 2, together with the simulation methods applied to obtain realistic space–time rainfall fields. The methodology for the radar error model is then outlined in detail in Sect. 3, with detailed explanations for each step of the model. Example event results are discussed in Sect. 4.2–4.5, with more general results based on event images given in Sect. 4.6–4.9. A discussion and conclusions are given in Sect. 5, with model limitations, potential for generalization and future work also discussed.

An ensemble of realistic rainfall events is used, generated using the clustered rainfall model outlined in Green et al. (2024). This model uses fast Fourier transform (FFT) methods to efficiently generate three-dimensional rainfall event fields with a high resolution matching that of radar data (1km, 5min) for a 200×200km domain. Events have prescribed properties, including the correlation structure, spatial anisotropy, spatio-temporal anisotropy, marginal distribution, non-zero rainfall proportions and advection.
The model is used with multi-dimensional scaling and hierarchical clustering to parameterize rainfall event simulations for 100 rainfall events. A year of processed dual-polarization C-band weather radar data is used to parameterize simulations of realistic space–time rainfall fields. This is obtained from the weather radar at High Moorsley (Met Office, 2003), which is located near Durham, England (54°48′20″N, 001°28′32″W), with a wavelength and frequency of 5.3cm and 5.6GHz, respectively. This operates at five elevation angles from 0.5 to 2° with 1° beam width, taking a scan at each elevation approximately every 5min.

This section outlines a novel model for imposing errors in the radar rainfall estimation process on a rainfall field, focusing on four main error sources: random noise effects, attenuation effects, DSD error and sampling through estimation variance. Sections 3.1–3.5 describe the error model in more detail, outlined in Fig. 1 and written in Python. While the model is by no means comprehensive, the main sources of random error are included. This is designed to provide a framework for investigating the impact of these errors, improving understanding of the estimation process.

3.1 Re-projecting to polar coordinates

The simulated rainfall fields are given on a regular three-dimensional Cartesian grid. To apply radar processing methods in reverse, the data must be re-projected into a polar coordinate system. Using nearest-neighbour interpolation methods, the Cartesian grid is converted into polar data

$$Z(t, x, y) \to Z(t, \theta, r) \qquad \text{(1)}$$

for ray angles $\theta = 1, 2, \dots, 360$ with ray bins $r = 1, 2, \dots, 167$ of width 600m and average elevation angle 1°.
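As a concrete illustration of this re-projection step, the sketch below maps a Cartesian field onto the polar grid described above by nearest-neighbour lookup. This is a minimal sketch, not the paper's implementation: the function name and the assumption that each bin centre is simply rounded to the nearest 1km pixel are ours, while the grid geometry (360 rays, 167 bins of 600m width) follows the text.

```python
import numpy as np

def cartesian_to_polar(field, centre, n_rays=360, n_bins=167, bin_width=0.6):
    """Re-project a 2-D Cartesian field (1 km pixels) onto a polar grid by
    nearest-neighbour lookup: for each (ray angle, range bin) compute the
    Cartesian coordinates of the bin centre and take the closest pixel."""
    cx, cy = centre
    theta = np.deg2rad(np.arange(n_rays))        # ray angles 0..359 degrees
    r = (np.arange(n_bins) + 0.5) * bin_width    # bin-centre ranges (km)
    rr, tt = np.meshgrid(r, theta)               # shape (n_rays, n_bins)
    # nearest neighbour: round the Cartesian coordinates of each bin centre
    ix = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, field.shape[0] - 1)
    iy = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, field.shape[1] - 1)
    return field[ix, iy]
```

For a 200×200km domain with 1km pixels, the radar sits at pixel (100, 100) and the furthest bin centre (about 100km) stays inside the grid.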
This mirrors the radar configuration of the High Moorsley weather radar used for parameterization. The different elevation angles and the differences in the sampling sizes of the pixels are incorporated through the use of estimation variance in Sect. 3.5.

3.2 Attenuation effects

A constrained gate-by-gate approach is applied to estimate the path-integrated attenuation (PIA) for each radar ray by inverting standard forward-attenuation models (Krämer and Verworn, 2008; Jacobi and Heistermann, 2016). Inverting the process gives an estimated attenuated reflectivity $\hat{Z}_i$ for the $i$th bin of width $\Delta r$ as

$$\hat{Z}_i = Z_{\mathrm{corr},i} - \sum_{j=0}^{i-1} \hat{k}_j \quad \text{and} \quad \hat{k}_i = c\left[Z_{\mathrm{corr},i} + (2\Delta r - 1)\sum_{j=0}^{i-1} \hat{k}_j\right]^d \qquad \text{(2)}$$

for constants $c$ and $d$. This results in a realistic radar image of attenuated reflectivity in a polar coordinate system at each time step of the event, denoted by $\hat{Z}(t, \theta, r)$. Using the scheme described above, for rainfall intensity $R(t)$ at time $t$, we get a PIA estimate

$$\mathrm{PIA}(t) = f\{R(t)\}, \qquad \text{(3)}$$

where $f$ is a function based on the estimation algorithm outlined in Jacobi and Heistermann (2016).

3.3 Random noise effects

When considering empirical variograms for weather radar images, Pegram and Clothier (2001) found that 10% of the variability in images corresponded to nugget effects, highlighting the need for random noise effects in radar pixel simulations. This noise is also evident in the marginal distributions of radar images, with the full year and an example "dry" day image for the High Moorsley weather radar given in Fig. 2, showing a large number of values in the range −32 to 0dBZ.
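Returning briefly to the attenuation step, the gate-by-gate inversion of Eq. (2) can be sketched along a single ray as follows. This is an illustrative sketch only: the constants c and d are not specified in the text, so the defaults below, and the guard against negative power-law bases, are our assumptions.

```python
import numpy as np

def impose_attenuation(z_corr, c=3e-5, d=0.9, delta_r=0.6):
    """Invert a gate-by-gate attenuation correction along one radar ray:
    given 'true' (corrected) reflectivities z_corr[i] (dBZ), accumulate
    specific-attenuation estimates k[j] and subtract the running sum to
    obtain the attenuated reflectivity the radar would see (Eq. 2)."""
    z_att = np.empty_like(z_corr, dtype=float)
    pia = 0.0                                    # running path-integrated attenuation
    for i in range(len(z_corr)):
        z_att[i] = z_corr[i] - pia               # attenuated reflectivity for bin i
        # guard against negative bases before the power law (our assumption)
        k_i = c * max(z_corr[i] + (2 * delta_r - 1) * pia, 0.0) ** d
        pia += k_i                               # PIA accumulates along the ray
    return z_att, pia
```

Because the PIA only ever grows along the ray, the attenuated profile is monotonically damped relative to the input.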
Although rainfall rates of less than 0.01mmh^−1 are hardly noticeable in terms of rainfall accumulations, this high density of low-reflectivity rates in radar images may have a significant effect on attenuation estimates along the radar rays. This noise may be attributed to the measuring apparatus, non-meteorological echoes or, most likely, a combination of various different sources. Errors are treated as random noise, representing a combination of errors from unknown sources clearly evident in real radar images. The random noise field is added to rainfall values to prevent numerical instabilities, with the marginal distribution from Fig. 2 converted to rainfall rates in Fig. 3. When considering the logarithm of weather radar noise (i.e. dry day images and values (dBZ) corresponding to rainfall rates less than 0.1mmh^−1), these are sufficiently Gaussian to satisfy the assumption of a log-normal marginal distribution for random noise effects. A log-normal marginal distribution allows for a simple and easy transformation when simulating the field using Gaussian random field theory. Empirical variograms of these values were estimated to identify an appropriate correlation structure, which has a very short correlation range of around 5km. The optimal spatial transformation for minimizing least squares between the marginal variogram values of the two spatial dimensions is used to estimate field anisotropy from empirical variogram fields, with estimates suggesting that isotropy of random noise fields is a valid assumption in this case. The three-dimensional noise field denoted by $\epsilon(t, x, y)$ is assumed to be log-normal, with a marginal distribution

$$\epsilon \sim \mathrm{LN}\left(\mu_{\epsilon}, \sigma_{\epsilon}^{2}\right), \qquad \text{(4)}$$

where $\mu_{\epsilon} = -5.3$ and $\sigma_{\epsilon} = 1.7$.
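A small-grid sketch of simulating such a log-normal noise field is given below. It uses a direct Cholesky factorization rather than the FFT methods of the full model, and the exponential correlation (5km range) and nugget (0.35) anticipate the structure fitted from empirical variograms; how the nugget enters the correlation matrix here is our assumption.

```python
import numpy as np

def lognormal_noise_field(n=40, r_eps=5.0, nugget=0.35, mu=-5.3, sigma=1.7, seed=0):
    """Simulate a small log-normal noise field with an exponential correlation
    structure rho(h) = (1 - nugget) * exp(-h / r_eps) plus a nugget, via a
    Cholesky factorization of the correlation matrix (fine for small grids;
    the full model uses FFT methods for 200x200 km domains)."""
    rng = np.random.default_rng(seed)
    xx, yy = np.meshgrid(np.arange(n), np.arange(n))
    pts = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
    h = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)  # pairwise distances (km)
    corr = (1.0 - nugget) * np.exp(-h / r_eps)
    np.fill_diagonal(corr, 1.0)                       # nugget acts only off the diagonal
    L = np.linalg.cholesky(corr + 1e-10 * np.eye(n * n))
    gauss = L @ rng.standard_normal(n * n)            # correlated N(0, 1) field
    return np.exp(mu + sigma * gauss).reshape(n, n)   # log-normal marginal
```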
A Gaussian random field is simulated with an exponential correlation structure of $\rho_{\epsilon}(h) = \exp\{-h/r_{\epsilon}\}$ for a short range with $r_{\epsilon} = 5$ and a nugget effect of $n_{\epsilon} = 0.35$. This is transformed using an inverse Gaussian score transformation and then exponentiated, resulting in a random noise field with the desired marginal distribution and correlation structure. An example field is included in Fig. 3, from which we can see that the variability is slightly higher than in existing images. This is however selected to preserve the proportion of −32dBZ reflectivity rates in images, with any values less than −32dBZ treated as −32dBZ.

3.4 Drop-size-distribution errors

Attenuated rainfall rates $\tilde{R}(t)$ can then be added to the three-dimensional noise field $\epsilon(t, x, y)$, which can then be converted into a reflectivity field. A Z–R relationship is typically used, of the form $Z = 10\log_{10}(aR^{b})$ for reflectivity $Z$ (dBZ), rainfall $R$ (mmh^−1) and constants $a$ and $b$, which typically take the values $a = 200$ and $b = 1.6$ (Harrison et al., 2000). Constant values for $a$ and $b$ are based on the assumption that the DSD varies spatially and temporally in a way characteristic of a particular rainfall or weather type. Despite this, a fixed Z–R relationship results in a severe underestimation of peak rainfall intensities due to the failure to account for natural variations in the DSD with intensity (Schleiss et al., 2020). Lee et al. (2007) indicated that the overall DSD variability cannot be adequately explained by a single parameter. In Libertino et al. (2015), a varying Z–R relationship in space and time improved rainfall accumulations at the event scale when compared to a fixed relationship.
A large amount of scatter around the average power-law relationship is related to the various microphysical processes that are responsible for the DSD variability. To account for this variability, in an attempt to generate more realistic reflectivity images, we assume that $a = A(x, y)$ is a two-dimensional field varying in space. As the simulated rainfall events all have a fairly short duration (6h or less), a constant DSD in time is initially used. This assumes that $A$ is fairly constant over the time period considered, although the model is flexible and the dimensions of $A$ can easily be extended to include time. Parameters in the Z–R relationship typically take values in the ranges $a \in (30, 1000)$ and $b \in (0.8, 2)$ (Battan and Theiss, 1973; Smith and Krajewski, 1993), and so parameter $b$ is still treated as constant but is sampled from a Gaussian distribution with a low variance centred around a value of $\mu_{b} = 1.6$. This gives attenuated reflectivity estimates of

$$\tilde{Z}(t) = 10\log_{10}\left\{A\left[\tilde{R}(t) + \epsilon(t)\right]^{b}\right\}, \qquad \text{(5)}$$

where

$$A(x, y) \sim N_{2}\left(\mu_{a}, \{1 + g(x, y)\}\Sigma_{a}\right) \quad \text{and} \quad b \sim N\left(\mu_{b}, \sigma_{b}^{2}\right) \qquad \text{(6)}$$

for correlation structure $\rho_{a} = \sigma_{a}\exp\{-h/r_{a}\}$, where $\mu_{a} = 220$, $\sigma_{a} = 2$, $r_{a} = 30$, $\mu_{b} = 1.6$ and $\sigma_{b} = 0.02$.
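The DSD step of Eqs. (5)-(6) can be sketched as below. This is an illustrative simplification, not the paper's implementation: for brevity A(x, y) is drawn independently per pixel (inflated by the estimation variance g) rather than as a spatially correlated Gaussian field, and the function name and flooring choices are ours.

```python
import numpy as np

def rain_to_reflectivity(rain, noise, mu_a=220.0, sigma_a=2.0,
                         mu_b=1.6, sigma_b=0.02, g=None, seed=0):
    """Convert an attenuated rainfall field (mm/h) plus a noise field to
    reflectivity (dBZ) with a spatially varying Z-R prefactor A(x, y):
    Z = 10 log10(A (R + eps)^b), with A inflated by the estimation
    variance g(x, y) and one b drawn per image."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(rain) if g is None else g
    a = rng.normal(mu_a, np.sqrt(1.0 + g) * sigma_a)   # A(x, y), i.i.d. per pixel here
    b = rng.normal(mu_b, sigma_b)                      # one exponent per image
    # tiny floor avoids log10(0) for dry pixels
    z = 10.0 * np.log10(a * np.maximum(rain + noise, 1e-10) ** b)
    return np.maximum(np.round(z, 1), -32.0)           # 1 d.p., floored at -32 dBZ
```

Dry pixels collapse to the −32dBZ floor, matching the treatment of low values described for the noise field.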
The function $g(x, y) = \sigma_{\mathrm{E}}(x, y)$ is the estimation variance based on the pixel location (x, y) and on the proportion of the rainfall volume that the radar can see for a given distance, which is discussed further in Sect. 3.5. Attenuated reflectivity fields are rounded to one decimal place and are limited by a minimum value of −32dBZ, as is the case for actual reflectivity data.

3.5 Radar sampling

Due to the nature of weather radar sampling, polar observations close to the receiver sample from a much smaller volume than those further away, as can be seen for an example pixel in Fig. 4. Earth curvature effects, cloud height and bright band effects also impact the sampling volume, and furthermore radar observations above the freezing level are unreliable due to the high reflectivity of melting precipitation (Hooper and Kippax, 1950; Kitchen et al., 1994; Hall et al., 2015). To address this, sampling errors are included as part of the DSD model, designed to increase uncertainty in the DSD where the volume of rainfall sampled by the radar beam is lower. Areas where it is unrealistic for the radar beam to be sampling rainfall (e.g. above the bright band level or outside the base and top of the cloud) are removed, with flexible model parameters which can be adjusted. In this case, the configuration of the High Moorsley weather radar is used (see Sect. 2). The radar sampling error is defined using estimation variance principles by representing the change in uncertainty between the actual and sampled volumes. Assuming a pixel is a vertical column denoted by $V$, dependent on the radar configuration and the distance of the pixel from the weather radar, parts of this vertical column will be sampled by the radar, denoted by $\nu$ (see Fig. 6).
The estimation variance $\sigma_{\mathrm{E}}$ can be defined as

$$\sigma_{\mathrm{E}} = 2\overline{\gamma}(V, \nu) - \overline{\gamma}(\nu, \nu) - \overline{\gamma}(V, V), \qquad \text{(7)}$$

where $\overline{\gamma}$ is the mean variogram and $V$ and $\nu$ are the total and sampled volumes, respectively. By discretizing the vertical pixel into small blocks, we can estimate the empirical variograms in Eq. (7), with an example of discretized blocks contributing to each term in Fig. 5 based on a variogram model for the vertical distribution of the DSD. We consider a vertical column of rainfall, assuming that the cloud base, bright band and cloud top are 1, 4 and 10km and assuming an exponential variogram model for below the bright band level. Discretizing the vertical column of rainfall into blocks of height 10m (see Fig. 6), the empirical distances for each discretized block are calculated using the variogram model. Parameterization is based on analyses of vertical weather radar data (Berne et al., 2005), considering the vertical raindrop volume distribution for a range of rainfall rates and heights. The variance of sampling volumes in different ranges supported the concept, with simulations compared to vertical radar images for validation.

In this section, the performance of the model is considered by looking at example fields for each step of the simulation process. We consider 100 rainfall events simulated using the methods outlined in Green et al. (2024) and parameterized by radar rainfall events from the High Moorsley weather radar (see Sect. 2). For each event, the model was run $n = 100$ times to obtain an ensemble of weather radar images.
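Before turning to the results, the estimation variance of Eq. (7) can be sketched by discretizing a vertical column into blocks and averaging a variogram over block pairs. The unit-sill exponential variogram and its range below are placeholders, not the parameterization fitted from vertical radar data.

```python
import numpy as np

def estimation_variance(v_heights, sampled_mask, r_v=1.0):
    """Estimation variance of Eq. (7): discretize the vertical column into
    blocks (heights in km), assume an exponential variogram
    gamma(h) = 1 - exp(-h / r_v) for the vertical DSD, and compare the mean
    variogram between the sampled part of the column (nu) and the whole
    column (V)."""
    def gamma_bar(a, b):
        # mean variogram over all block pairs between sets a and b
        h = np.abs(a[:, None] - b[None, :])
        return np.mean(1.0 - np.exp(-h / r_v))
    nu = v_heights[sampled_mask]    # blocks actually seen by the beam
    return (2.0 * gamma_bar(v_heights, nu)
            - gamma_bar(nu, nu)
            - gamma_bar(v_heights, v_heights))
```

When the beam samples the whole column the variance vanishes; sampling only part of it (e.g. the beam intersecting the lowest kilometre) yields a positive variance, which is what inflates the DSD uncertainty at far ranges.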
To enable direct comparisons, each radar image is corrected using a standard radar rainfall estimation process, resulting in corrected rainfall for each ensemble member, similar to what would be obtained from a radar rainfall image. This includes an attenuation correction (Jacobi and Heistermann, 2016), a Z–R relationship (Harrison et al., 2017) and re-projection onto a Cartesian grid to allow for direct comparisons with the original simulated rainfall field $R$. Defining the difference between the simulated true rainfall field ($R$) and the corrected rainfall field ($R_{\mathrm{corr},i}$) after applying the radar error model as the error, we consider specific events $R$, individual event time steps $R(t)$ throughout the ensemble (i.e. image-based) and the behaviour of all events. To investigate the capacity of the radar error model to capture uncertainty, we consider the error metrics outlined below.

1. The mean bias between corrected ensemble members and original simulated rainfall

$$\mathrm{BIAS}(R) = \frac{1}{n}\sum_{i=1}^{n}\left(R - R_{\mathrm{corr},i}\right) \qquad \text{(8)}$$

for ensemble members $i = 1, \dots, 100$.

2. The root-mean-square error (RMSE) across the ensemble

$$\mathrm{RMSE}(R) = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(R - R_{\mathrm{corr},i}\right)^{2}} \qquad \text{(9)}$$

3. The pixel variability throughout the ensemble, using the standard deviation

$$\mathrm{SD}(R_{\mathrm{corr}}) = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(R_{\mathrm{corr},i} - \overline{R}_{\mathrm{corr}}\right)^{2}} \qquad \text{(10)}$$

4.
The minimum and maximum RMSE values across the ensemble

$$\mathrm{RMSE}_{\mathrm{min}}(R) = \min_{i=1,\dots,n}\left\{\sqrt{\left(R - R_{\mathrm{corr},i}\right)^{2}}\right\} \quad \text{and} \quad \mathrm{RMSE}_{\mathrm{max}}(R) = \max_{i=1,\dots,n}\left\{\sqrt{\left(R - R_{\mathrm{corr},i}\right)^{2}}\right\} \qquad \text{(11)}$$

5. The average error across the ensemble, as a percentage of the original simulated rainfall field for all non-zero simulated rainfall pixels

$$p_{\mathrm{R}} = \frac{1}{n}\sum_{i=1}^{n}\left(1 - \frac{R_{\mathrm{corr},i}}{R}\right) \times 100\,\%, \qquad \text{(12)}$$

where $R > 0$.

Additional metrics are defined to identify cases where significant amounts of rainfall are missing in corrected rainfall fields. Crane (1979) referred to distortions in storm structures, as a result of attenuation, as shadows. In this study we define rainfall shadows as areas where information about the rainfall field is lost from a simulated weather radar image after correction methods have been applied. A formal definition of a rainfall shadow in this case is taken to be a pixel where the simulated rainfall is significant (i.e. R>1mmh^−1) but the corrected rainfall is much lower (less than 10%) than the original simulated rate (i.e. $R_{\mathrm{corr}}/R \le 0.1 \mid R > 1$). We first consider example fields for each stage of the error model in Sect. 4.1 and then three events which show high bias and low and high variability in Sect. 4.2–4.4, quantified based on the above metrics at an event level.
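A sketch of these error metrics and the shadow definition for an ensemble stack is given below, together with the rainfall-shadow image metrics used in the results (ARS, the total shadowed area; LARS, the largest single shadow patch; PRS, the proportion of significant rainfall that is shadowed). Function names are ours, and LARS assumes 4-connectivity for shadow patches, which the text does not specify.

```python
import numpy as np
from collections import deque

def ensemble_metrics(r_true, r_corr):
    """Pixel-wise error metrics of Eqs. (8)-(11) for an ensemble stack
    r_corr of shape (n, ny, nx) against the simulated truth r_true (ny, nx)."""
    err = r_true[None] - r_corr
    bias = err.mean(axis=0)                    # Eq. (8): mean bias
    rmse = np.sqrt((err ** 2).mean(axis=0))    # Eq. (9): ensemble RMSE
    sd = r_corr.std(axis=0)                    # Eq. (10): pixel variability
    rmse_min = np.abs(err).min(axis=0)         # Eq. (11): min/max over members
    rmse_max = np.abs(err).max(axis=0)
    return bias, rmse, sd, rmse_min, rmse_max

def shadow_mask(r_true, r_corr_member):
    """Rainfall shadow: significant simulated rain (> 1 mm/h) where the
    corrected rate recovers at most 10 % of the truth."""
    return (r_true > 1.0) & (r_corr_member <= 0.1 * r_true)

def shadow_image_metrics(r_true, shadows, pixel_area=1.0):
    """ARS (total shadowed area), LARS (largest single shadow patch, assuming
    4-connectivity) and PRS (shadowed proportion of significant rain) for one
    image, with pixel_area in km^2."""
    ars = shadows.sum() * pixel_area
    significant = r_true > 1.0
    prs = shadows[significant].mean() if significant.any() else 0.0
    seen = np.zeros_like(shadows, dtype=bool)
    lars = 0
    ny, nx = shadows.shape
    for sy, sx in zip(*np.nonzero(shadows)):   # breadth-first patch search
        if seen[sy, sx]:
            continue
        size, queue = 0, deque([(sy, sx)])
        seen[sy, sx] = True
        while queue:
            y, x = queue.popleft()
            size += 1
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                u, v = y + dy, x + dx
                if 0 <= u < ny and 0 <= v < nx and shadows[u, v] and not seen[u, v]:
                    seen[u, v] = True
                    queue.append((u, v))
        lars = max(lars, size)
    return ars, lars * pixel_area, prs
```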
The behaviour of individual radar images is considered by looking at the average, minimum and maximum behaviours over the ensemble, including the variability. Attempts are made to find metrics and properties of rainfall images with the aim of identifying instances where there is a very high level of uncertainty or error arising from the rainfall estimation process. The impact of the rainfall location with respect to the radar is considered, together with identifying how often significant information on the rainfall field is lost. For individual image-based errors, we introduce three image metrics below relating to rainfall shadows:

1. ARS is the actual area (km^2) of the radar image that contains rainfall shadows.

2. LARS is the largest single area (km^2) of rainfall shadows in a radar image.

3. PRS is the proportion of significant rainfall (i.e. R>1mmh^−1) that is shadowed.

4.1 Example fields

For an example time step of a simulated event, each stage of the radar error model process is given in Fig. 7. The final radar image (Fig. 7f) appears realistic, with clear areas of rainfall similar to raw radar images obtained from the High Moorsley weather radar. A significant proportion of the signal is attenuated towards the edge of the domain, particularly in the top right of the image.

4.2 Event A: high bias

The event shown in Fig. 8a has an area of moderate-intensity rainfall in the centre of the image with a large extent, resulting in high bias. The simulated radar image for an ensemble member associated with the event looks realistic, with the reflectivity and corrected rainfall rates (see Fig. 8b and c) showing significant rainfall amounts missing throughout. The average bias, RMSE and pixel variability corresponding to the event in Fig. 8 are given in Fig. 9. The average bias and RMSE are very high (see Fig. 9a and b), taking values over 5mmh^−1.
Figure 9c shows very low pixel variability for most of the image, which is reflected in the range of RMSE values throughout the ensemble given in Fig. 9d and e, with a large area in the centre of the image showing a bias and RMSE greater than 5mmh^−1, suggesting that the rainfall is consistently underestimated throughout the ensemble. A large area of moderate-intensity rainfall on top of the radar is overcorrected, mimicking effects resulting from full attenuation of the radar signal by intervening rainfall. In this case, the correction techniques will not improve the image significantly, and so information on a large portion of the rainfall field is lost, particularly when forward-attenuation correction algorithms are implemented. This result is reiterated when looking at the rainfall shadows in Fig. 10b, where around one-fourth of the image is shadowed for 100% of the ensemble members. This event has a very high average bias, with pixel variability varying drastically throughout the image. Large areas of rainfall are missing, and the differing variability throughout and spatial distribution of the error structure suggest that a mean field bias or multiplicative correction would not improve the estimates significantly. The information on the rainfall structure and rates would be lost in this case.

4.3 Event B: low variability

Figure 11a shows a rainfall event with a small extent of light rainfall and mostly dry conditions throughout the image, resulting in low variability throughout the simulated ensemble. There is a small amount of light rainfall in the centre left of the radar domain, with the corrected rainfall image in Fig. 11c exhibiting lower rainfall rates here than the original simulated rainfall. The radar image in Fig. 11 appears realistic, with a small amount of signal damping towards the left of the image in a range beyond the rainfall seen in Fig. 11a. The corresponding bias and RMSE for this event are given in Fig.
12, together with the pixel variability and maximum and minimum RMSEs over the ensemble and the average proportional error. Over the ensemble, the average bias (see Fig. 12a) is close to zero, except for the low-intensity rainfall areas (with at most 0.5mmh^−1), with low average, minimum and maximum RMSEs in Fig. 12b, d and e. The pixel variability is slightly higher in the rainfall area (see Fig. 12c), with pixels at a larger distance from the transmitter in this area showing lower pixel variability (less than 0.02mmh^−1) and the remaining variability appearing uniform. In Fig. 13b, the rainfall is shadowed in 100% of the rainfall ensemble (i.e. all the ensemble members) in the area of low-intensity rainfall identified in Fig. 11. The frequency of shadows over the ensemble mostly has values of either zero or one. This event has very low variability between the ensemble members, likely due to the (mostly) zero rainfall amounts in the images.

4.4 Event C: high variability

Figure 14 shows an event with a small area of heavy rainfall rates, which results in high variability in event errors. Most of the radar domain shows zero rainfall rates, except for a very small area of high-intensity rainfall (greater than 100mmh^−1) towards the top of the domain. The example radar image (see Fig. 14b) is again realistic, showing mostly noise. Radial lines in the top right past a small amount of high-intensity rainfall suggest that attenuation effects have not been sufficiently corrected. In Fig. 14c the corrected rainfall image has areas of high-intensity rainfall which are overestimated due to cumulative errors introduced as part of forward-attenuation correction procedures. Although there is no large area of high-intensity rainfall, the rainfall field's spatial distribution has still been significantly impacted by the errors caused by attenuation. From Fig. 15a and b, these radial lines show a positive average bias and RMSE; however, neither exceeds 0.2mmh^−1.
Errors are not noticeable from the maximum RMSE field but have a much higher minimum RMSE than the rest of the image (see Fig. 15d and e). In this case, it appears that the attenuated high-intensity rainfall has been overcorrected, showing an average proportional error greater than one. Pixels within the affected rays at distances further from the radar transmitter are underestimated. Although these are mostly light rainfall rates, the rays have been significantly impacted, showing a clear gap when comparing the simulated and corrected rainfall fields. The shadowed pixels in Fig. 16 show radial lines in areas where the corrected rainfall is less than 10% of the simulated rainfall. Most of these pixels do not have significant rainfall rates and so are not classed as shadowed, with only a handful of pixels showing shadows and none for 100% of the ensemble, resulting in high variability within the ensemble. From the videos of event errors, this high-intensity rainfall moves across the image, affecting different rays, with some pixels showing rainfall shadows past the high-intensity rainfall. This is not consistent throughout the ensemble, with most pixels showing shadows in less than 100% of the ensemble members. This small area of high-intensity rainfall has resulted in high variability across the ensemble, which is impacted significantly by the overestimation of high-intensity rainfall rates, with the DSD error also contributing to the variability for such a high rainfall rate.

4.5 Specific events: summary

In cases where the absolute bias is low, rainfall shadows may still exist, suggesting that average bias is a poor metric to use when identifying event errors. Even with a low absolute bias, the rainfall could be overestimated for pixels closer to the radar and underestimated past these pixels (which is very common along radar rays where attenuation has been overestimated).
In these cases, the spatial distribution of rainfall is often incorrect, which could have detrimental effects when using the rainfall fields for any quantitative modelling. Typically, events with high bias correspond to events with large rainfall shadows, with shadow frequencies close to either zero or one, suggesting that, in cases where there is high bias, the model uncertainty is low. Simulated rainfall events which exhibited high ensemble variability resulted from potential interactions between small areas of high-intensity rainfall and DSD errors, which corresponded to higher variability in shadows throughout the ensemble. Videos of all the simulated events (and the corresponding errors) are available in Green (2023).

4.6 Individual image-based errors: average ensemble behaviour

The relationship between the average rainfall rate and the proportion of rainfall in an image with the image RMSE is shown in Fig. 17a, showing that events with high average rainfall rates and large, heavy rainfall proportions have the highest RMSE. Images showing fairly low proportions (i.e. 5%–10%) of heavy rainfall still exhibit a fairly high RMSE. Figure 17b shows the relationship between the mean and standard deviation of non-zero rainfall rates with the image RMSE, showing a higher RMSE for events with a high average and standard deviation in non-zero rainfall rates. This may be due to the large errors resulting in large gradients between pixels, where a large rainfall rate along a ray damps the signal so that subsequent observations are much lower, increasing the pixel variability in images. Figure 18 shows the average rainfall rate (see Fig. 18a–e) and the proportion of non-zero rainfall (see Fig. 18f–j) in event images for the average bias, RMSE, ARS, PRS and LARS. For higher rainfall rates and proportions of rainfall, the bias increases, with low proportions of low rainfall rates exhibiting a negative average bias. This is also the case for the average non-zero rainfall rates.
However, the relationship between the proportion of non-zero rainfall and the RMSE is less distinct, highlighting the significant impact of very small areas of intense rainfall rates on the image's RMSE. Figure 18 also shows that there are some low average rainfall rates and proportions of non-zero rainfall which correspond to large areas of shadowed rainfall. This may be attributed to noise. However, this does suggest again that events do not need to include intense large-scale rainfall areas to result in significant rainfall shadows. The proportion of shadowed rainfall also appears to increase exponentially as the average non-zero rainfall rate increases, which is not the case with the proportion of rainfall in the images. The relationships in Fig. 18 are heavily skewed by a high density of images with low average rainfall rates and proportions. For a clearer image of the behaviour for event images, see the average rainfall rates of 0.1, 0.5 and 1mmh^−1 in Fig. 19. This shows a much clearer relationship between the corrected rainfall field and the rainfall shadows, with strong correlations between the average bias and the average rainfall rate (see Fig. 19a, f and k). The relationship becomes less clear for higher rainfall thresholds for conditional averages, with the strongest correlation between the non-zero rainfall average and the RMSE (see Fig. 19b). From Fig. 19c, h and m, the ARS in the images may increase exponentially with increasing average rainfall rates. However, this may be skewed by the small number of images with very high ARS values (larger than 1000km^2). For the thresholded rainfall rates, most correspond to a low ARS value but appear to increase exponentially, particularly in Fig. 19c. There are some images with low ARS values which have high-threshold rainfall rates, which may be a result of small areas of high-intensity rainfall, where there are no resulting shadows as there are no other areas of significant rainfall rates (R(t)).
The relationship between the proportion of rainfall is more complex (see Fig. 19d, i and n), with two distinct types of image behaviour. While a correlation is evident between the conditional average rainfall rates and the PRS values, there is clearly a large number of images with low average rainfall rates and large PRS values. These high PRS values with low average rainfall rates most likely correspond to images with a very low proportion of rainfall rates large enough to be classed as shadows. If just one rainfall pixel is shadowed, there would be a large increase in the PRS value in this case, highlighting the impact that shadows have on images with low rainfall extents, with fairly low average rainfall rates. From Fig. 19e, j and o the LARS values appear to have a similar relationship with the ARS values. However, these are again skewed by very high ARS values, and so the relationship is less clear. Considering that the overall average behaviour of the ensemble does not take full advantage of the model framework and different ensemble member properties, important variation information is lost, which may have increased understanding of the uncertainty associated with the radar rainfall estimation process.

4.7 Individual image-based errors: ensemble variability

The variability between the ensemble members for each event image is considered using the ensemble standard deviation to identify areas where the event errors have a high level of uncertainty in image properties. Figure 20a shows the relationship between the average non-zero rainfall rate and the standard deviation of the image bias, showing that, for rainfall images with an average non-zero rainfall rate of less than 0.5mmh^−1, there is no clear relationship between the average rainfall and the variability in the bias of estimates.
For images with an average non-zero rainfall rate above 0.5 mm h^−1, there appears to be a strong positive correlation between the two, suggesting that, past this image threshold, the uncertainty in the image bias is directly proportional to the average non-zero rainfall rate. The relationship between the average non-zero rainfall rate and the variability in the image's RMSE in Fig. 20b is very different, appearing to be inversely proportional. This may be attributed to the fact that, for higher rainfall rates, there are more rainfall shadows. In areas where there are rainfall shadows, the variability between the ensemble members decreases significantly due to the effects of the radar ray signal being fully damped. There is a moderate relationship between the variability in the PRS and the average non-zero rainfall. However, this is less distinct. The standard deviations of the ARS, PRS and LARS are given in Fig. 21. From Fig. 21a we can see that the uncertainty in the ARS increases with an increasing average rainfall rate, with similar behaviour for the LARS in the images (see Fig. 21c). However, there are a lot of images with no ARS skewing the relationship. Again, the relationship between the average non-zero rainfall rate and the variability of the PRS is not clear (see Fig. 21b).

4.8 Rainfall location: second moment of area

The location of rainfall with respect to the radar location will also impact the error structure, which is reflected in the variability in the PRS for different average rainfall rates. Atlas and Banks (1951) stated that distortion due to range attenuation includes displacement of the maximum intensity towards the radar and packing of contours on the near side of the storm, suggesting that the location of high-intensity rainfall will also have an impact on errors.
The amount of rainfall lost to attenuation effects is likely to be higher for images with high-intensity rainfall in a more central location, due to the cumulative nature of attenuation effects along a radar ray. As illustrated in Fig. 22, the occurrence of rainfall close to the radar transmitter affects more rays, and its attenuation affects more "downstream" pixels. There is evidence of this in Fig. 9, where high-intensity rainfall occurred in the centre of the image, resulting in very high errors. To formally investigate this, we introduce the second moment of area of a rainfall (or reflectivity) image, computed about the radar location. For the rainfall field R(t) on a Cartesian grid with dimensions (N_x, N_y), the second moment of area can be estimated as

$M_\mathrm{R}(t) = \sum_{x=1}^{N_x} \sum_{y=1}^{N_y} R(x,y,t)\, d(x,y)^2 = \sum_{x=1}^{N_x} \sum_{y=1}^{N_y} R(x,y,t) \left\{ (x - r_x)^2 + (y - r_y)^2 \right\}, \quad (13)$

where $d(x,y) = \sqrt{(x - r_x)^2 + (y - r_y)^2}$ is the distance of a pixel from the radar location (r_x, r_y). Both the actual and normalized rainfall fields are used, to see the different impacts of the shape of the field with and without considering the actual magnitudes of the estimates. The normalized image moments $\tilde{M}_\mathrm{R}(t)$ are defined as

$\tilde{M}_\mathrm{R}(t) = \sum_{x=1}^{N_x} \sum_{y=1}^{N_y} \frac{R(x,y,t)}{R_\mathrm{tot}(t)}\, d(x,y)^2, \quad (14)$

where $R_\mathrm{tot}(t) = \sum_{x} \sum_{y} R(x,y,t)$ is the total rainfall in the image.
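The second moment of area defined above (Eqs. 13 and 14) is straightforward to compute on a gridded field; a minimal sketch, with illustrative grid sizes and radar position:

```python
import numpy as np

def second_moment(R, radar_xy, normalize=False):
    """Second moment of area of a rainfall image about the radar location:
    M_R = sum_xy R(x, y) * d(x, y)^2, optionally with R normalized by the
    total rainfall R_tot (the normalized moment of Eq. 14)."""
    ny, nx = R.shape
    X, Y = np.meshgrid(np.arange(nx), np.arange(ny))
    rx, ry = radar_xy
    d2 = (X - rx) ** 2 + (Y - ry) ** 2     # squared distance to the radar
    if normalize:
        R = R / R.sum()                     # weights now sum to one
    return float((R * d2).sum())

# A single rain pixel at the radar gives zero moment; a distant pixel of the
# same intensity gives a large one, reflecting its off-centre location.
img = np.zeros((21, 21))
img[10, 10] = 5.0
img2 = np.zeros((21, 21))
img2[0, 0] = 5.0
print(second_moment(img, (10, 10)), second_moment(img2, (10, 10)))  # → 0.0 1000.0
```

Normalizing removes the dependence on the absolute rainfall amount, so the normalized moment responds only to where the rainfall sits relative to the radar, which is the distinction exploited in Figs. 23 and 24.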
Figure 23 shows the relationship between the second moment of area $M_\mathrm{R}(t)$ and the normalized second moment of area $\tilde{M}_\mathrm{R}(t)$ for event images and corresponding ensemble metrics including the average bias, RMSE, ARS, LARS and PRS. There are significant differences between the moments and normalized moments, with positive correlations for image moments and an inverse relationship for normalized image moments, which is particularly prominent when considering the RMSE. There is a significant positive correlation between the image moment and the RMSE, ARS and PRS, with the relationship for the LARS being less clear. This suggests that, for the second moment of area calculated on rainfall rates, the strong non-linear dependence on the absolute rainfall amount overrides all other information on the field, such as rainfall location. For the normalized second moment of area, this dependence has been removed, and so the relationship is based solely on the impact of the rainfall location. In this case, a smaller second moment of area (corresponding to a more central rainfall location) suggests larger rainfall shadows. Figure 24 shows the relationship between the second moment of area and the normalized second moment of area for events, together with the ensemble uncertainty for the image metrics given in Fig. 23. This shows that images with a larger image moment have a lower RMSE standard deviation. The normalized image moments appear to be positively correlated with the standard deviation of the bias and RMSE and negatively correlated with those of the ARS, PRS and LARS. The variability in rainfall shadows, for both the ARS and PRS, appears to decrease with increasing normalized image moments. Image moments could be a key piece of information when attempting to identify radar images with high uncertainty in estimates, particularly when using moments calculated from normalized rainfall rates across an image.
In conclusion, this analysis suggests that the second moment of area has the potential to identify high uncertainty and missing information.

4.9 Rainfall shadow frequency

The aim of this section is to identify how often rainfall shadows occur. Due to ensemble variability, to ensure that frequencies are not overestimated, we consider the "best-case scenario" over the ensemble: for each image, the ensemble member with the lowest errors is selected. This prevents overestimation; moreover, as rainfall fields are parameterized with existing corrected radar rainfall images that may themselves be subject to rainfall shadows, the simulations may inherently underestimate the frequencies, so considering the minimum likelihood of occurrence makes intuitive sense. Percentiles are given for the minimum LARS, PRS and ARS in Table 1. The empirical cumulative distribution functions for the minimum LARS, PRS and ARS are estimated for simulated events, with attenuation estimated from rainfall and reflectivity, and given in Fig. 25. From the median for the proportion of the rainfall images simulated, half of the images have at least 3% of their significant rainfall rates shadowed. When considering the ARS in the images, we see that 25% of the images have an ARS of over 45 km^2 and that 5% have a LARS of over 50 km^2. A missing area of significant rainfall of this size, particularly for small or urban catchments, constitutes a major underestimation of flood risk, resulting in incorrect information provided in flood warnings. This highlights the importance of gaining an improved understanding of rainfall shadows and provides a motivation for this project and future research in this area. Gaps caused by the rainfall shadows identified would result in underprediction of flooding, impacting both flood warnings and flood defence designs.
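The "best-case" frequency analysis above, which takes the minimum shadow metric over the ensemble for each image and then summarizes percentiles and an empirical CDF, can be sketched as follows. The metric values here are synthetic, purely for illustration; they are not the study's simulated PRS values.

```python
import numpy as np

rng = np.random.default_rng(2)
n_images, n_members = 500, 20
# Synthetic PRS (proportion of rainfall shadowed) per image and ensemble member
prs = rng.beta(0.5, 8.0, size=(n_images, n_members))

min_prs = prs.min(axis=1)                  # best case over the ensemble
p50, p75 = np.percentile(min_prs, [50, 75])
print(round(p50, 3), round(p75, 3))

# Empirical-CDF value at a threshold: the fraction of images whose minimum
# PRS does not exceed 3 %
ecdf_003 = (min_prs <= 0.03).mean()
```

Because the minimum over members is taken before computing percentiles, any reported shadow frequency is a lower bound on what the full ensemble would suggest, which is the conservative reading intended here.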
5 Discussion and conclusions

Errors relating to several different aspects of the radar rainfall estimation process are considered, using a radar error model outlined in detail. This model is applied to realistic simulated rainfall events in a stochastic manner, generating an ensemble of radar images corresponding to each time step of a rainfall event. A log-normal random noise field is imposed on rainfall estimates to account for underlying non-specific noise. The DSD uncertainty is included by replacing the multiplicative parameter a in the Z–R relationship with a two-dimensional spectral random field, with field variability determined by radar sampling volumes. Attenuation effects are imposed by inverting standard gate-by-gate correction algorithms (Jacobi and Heistermann, 2016). To enable direct comparison between the simulated rainfall before and after imposing the radar error model, each radar image is corrected using a standard radar rainfall estimation process. This results in a corrected rainfall field for each ensemble member, similar to what would be obtained from real radar rainfall images, allowing us to identify when and where significant and/or systematic errors may occur. This concept provides a methodology for developing a better understanding of errors related to the radar rainfall estimation process. By generating a true rainfall field and subsequently imposing errors to allow for comparison with corrected best-guess stochastic radar rainfall estimates, we can address the fundamental limitation of weather radar correction schemes – that the real rainfall field is not known for comparison. An investigation of the spatio-temporal behaviour of the error structure is then possible, which provides key information on the radar rainfall estimation process. A relationship between rainfall shadows, high bias and uncertainty related to the amount of rainfall (i.e. proportions and rates) in images was found.
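The error-imposition pipeline summarized above (a stochastic multiplicative parameter in the Z–R relationship, residual noise, then retrieval with a fixed standard parameter) can be illustrated in miniature. Attenuation and sampling effects are omitted, and all parameter values (the Marshall–Palmer a = 200, b = 1.6 and the noise strengths) are illustrative, not the study's calibrated ones.

```python
import numpy as np

rng = np.random.default_rng(3)
b = 1.6
a_std = 200.0                              # fixed a used in the retrieval
R_true = rng.gamma(1.0, 2.0, size=(32, 32))  # synthetic 'true' rainfall field

# DSD variability: spatially varying multiplicative parameter a, plus
# residual multiplicative noise on the 'measured' reflectivity
a_field = a_std * rng.lognormal(0.0, 0.1, size=R_true.shape)
noise = rng.lognormal(0.0, 0.05, size=R_true.shape)
Z = a_field * R_true ** b * noise          # forward model: Z = a R^b (noisy)

# Standard retrieval assuming the fixed a, as a real correction scheme would
R_est = (Z / a_std) ** (1.0 / b)
bias = (R_est - R_true).mean()
print(round(bias, 4))
```

Comparing R_est against the known R_true is exactly what is impossible with real radar data, which is the gap this simulation framework is designed to close.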
The impact of the rainfall location with respect to the weather radar is considered by introducing the second moment of area, showing that more central rainfall in the radar domain results in higher errors and variability. The minimum likelihood of rainfall shadows showed that 50% of the simulated images have at least 3% of their significant rainfall shadowed. In addition, 25% of the images had an ARS of over 45 km^2, with the minimum LARS of 5% of the images exceeding an area of 50 km^2. This gap would result in underestimation of the potential impacts of flooding. This highlights the importance of gaining an improved understanding of rainfall shadows and provides the motivation for this project and future research in this area. Weather radar alone cannot be used for rainfall estimation, as information is regularly missed.

5.1 Impact and transferability

Improved high-resolution rainfall estimates are needed for flood forecasting by stakeholders and water managers, particularly in (near) real time, for nowcasting and probabilistic ensemble forecasting. The radar error model outlined in this study produces visually realistic radar images, capturing key properties of radar images, with many potential uses for the model framework, some of which are outlined below: 1. Identification of radar rainfall properties which correspond to high errors and uncertainties could be extended and used in a probabilistic manner to better condition merged rainfall fields, and to identify areas where the spatial distribution of the rainfall cannot be trusted (i.e. occasions where rainfall shadows are likely, such as areas past high-intensity rainfall). This includes using information on the frequency and location of rainfall shadows as a merging criterion (i.e. putting more weight on rainfall information from other sources when rainfall shadows are likely to occur at a given location). 2.
Apply the model to gridded rain gauge fields or forecasts, for comparison with the corresponding weather radar images, to better identify and understand radar rainfall errors. 3. The importance of complex and efficient radar–gauge merging methods is emphasized in this study; such methods should not trust the spatial distribution of rainfall provided by weather radars alone. Additional information from other sensors is needed, such as opportunistic sensors, citizen science data, rain gauges and microwave links. This study has provided a framework for assessing the performance of these merging techniques. 4. Determine the optimal locations for weather radars or rain gauges, such as establishing rain gauge networks in areas where rainfall shadows are more likely and not in densely populated areas (e.g. cities), where it may be too late for warnings on missing information. 5. Assess how radar rainfall errors propagate into hydrological and hydrodynamic modelling, considering the impact that the incorrect distribution of rainfall has on discharge and flood depths.

5.2 Limitations and future work

A powerful framework for investigating radar rainfall errors has been developed and demonstrated, with the model design allowing for a high degree of flexibility and several natural extensions. The influence of different correction methods for the radar rainfall estimation process, and the impact this has on the error structure, should be investigated using the methodology in this study. Some error sources in radar rainfall estimation are not included in the radar error model, as they were beyond the scope of this study. To improve the model, additional sources of error could easily be included (e.g. radar calibration errors using an additive error (dBZ) and bright-band effects using a vertical representation of different weather types and seasons of events). Mountainous regions are typically subject to more errors due to beam blockage from topography.
However, this could quite easily be included in the model through an additive error based on existing clutter maps from the weather radar of interest. It would be interesting to repeat the study with different DSD structures, changing the correlation structure and marginal distribution of this (previously Gaussian) field. A dependence between DSD parameters could be imposed, as a varying Z–R relationship in space and time improved rainfall accumulations at the event scale (Libertino et al., 2015). Close to the radar, measurement volumes are small, systematically increasing in size with distance from the radar. Although as part of the radar error model the spatial sampling aspect is considered through the estimation variance, radar measurements are taken to be instantaneous (as opposed to rain gauge measurements, which are temporal aggregations by definition). The implications of these space–time sampling properties mean that temporally we only have a snapshot of the pixel behaviour at a given time. Correlation structure variability in space and time was incorporated through a spatio-temporal anisotropy factor, without explicitly accounting for the different data sampling between the two dimensions. The simulation environment could be modified to account for the temporal sampling issue by simulating temporally at a higher resolution than existing radar images and sampling these to reproduce the snapshot effect. Methods could also be developed within an inverse modelling framework (Grundmann et al., 2019) to obtain field uncertainty in (near) real time.

5.3 Concluding remarks

The overarching aim of this study is to contribute towards improvements in the radar rainfall estimation process by gaining an improved understanding of the frequency and location of the error structure relating to the process.
With this in mind, we explore and exploit space–time properties of rainfall and reflectivity to gain an improved understanding of the error structure between the two, investigating the extent of uncertainties in the radar rainfall estimation process. This study has presented an innovative model for investigating uncertainties in the radar rainfall estimation process, providing a flexible tool that has many potential future applications. The radar error model, outlined in detail, generates a stochastic ensemble of radar images corresponding to an existing rainfall field by inverting the radar rainfall estimation process. This model incorporates many different error sources, including the drop-size distribution, attenuation effects, random noise and radar sampling. This provides a method for identifying when and where radar errors are likely to occur and how often information about the rainfall field is lost, significantly impacting the spatial rainfall field. The insights from this study provide an improved understanding of the error structure between rainfall and reflectivity, together with the extent of uncertainties in the radar estimation process. A framework has been provided to investigate the impact of errors relating to the radar rainfall estimation process, with many potential hydrological applications. ACG: conceptualization, methodology, software, formal analysis, investigation, data curation, writing – original draft, visualization. CK: conceptualization, methodology, writing – review and editing, supervision. AB: conceptualization, methodology, supervision. At least one of the (co-)authors is a member of the editorial board of Hydrology and Earth System Sciences. The peer-review process was guided by an independent editor, and the authors also have no other competing interests to declare. 
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors. This work is funded by the Natural Environment Research Council (NERC)-sponsored Data, Risk and Environmental Analytical Methods (DREAM) Centre for Doctoral Training in risk and mitigation research using big data. We thank two anonymous referees for their detailed and constructive reviews which have resulted in significant improvements to the paper. This research has been supported by the Natural Environment Research Council (grant no. NE/M009009/1). This paper was edited by Efrat Morin and reviewed by two anonymous referees.

AghaKouchak, A., Bárdossy, A., and Habib, E.: Conditional simulation of remotely sensed rainfall data using a non-Gaussian v-transformed copula, Adv. Water Resour., 33, 624–634, https://doi.org/10.1016/j.advwatres.2010.02.010, 2010a.
AghaKouchak, A., Habib, E., and Bárdossy, A.: A comparison of three remotely sensed rainfall ensemble generators, Atmos. Res., 98, 387–399, https://doi.org/10.1016/j.atmosres.2010.07.016, 2010b.
Atlas, D. and Banks, H. C.: The Interpretation of Microwave Reflections From Rainfall, J. Meteorol., 8, 271–282, https://doi.org/10.1175/1520-0469(1951)008<0271:tiomrf>2.0.co;2, 1951.
Battan, L. J. and Theiss, J. B.: Wind Gradients and Variance of Doppler Spectra in Showers Viewed Horizontally, J. Appl. Meteorol., 12, 688–693, https://doi.org/10.1175/1520-0450(1973)012<0688:wgavod>2.0.co;2, 1973.
Berne, A. and Uijlenhoet, R.: Quantitative analysis of X-band weather radar attenuation correction accuracy, Nat. Hazards Earth Syst. Sci., 6, 419–425, https://doi.org/10.5194/nhess-6-419-2006, 2006.
Berne, A., Delrieu, G., Creutin, J.
D., and Obled, C.: Temporal and spatial resolution of rainfall measurements required for urban hydrology, J. Hydrol., 299, 166–179, https://doi.org/10.1016/j.jhydrol.2004.08.002, 2004.
Berne, A., Delrieu, G., and Andrieu, H.: Estimating the vertical structure of intense Mediterranean precipitation using two X-band weather radar systems, J. Atmos. Ocean. Tech., 22, 1656–1675, https://doi.org/10.1175/JTECH1802.1, 2005.
Cecinati, F., Rico-Ramirez, M. A., Heuvelink, G. B., and Han, D.: Representing radar rainfall uncertainty with ensembles based on a time-variant geostatistical error modelling approach, J. Hydrol., 548, 391–405, https://doi.org/10.1016/j.jhydrol.2017.02.053, 2017.
Ciach, G. J. and Gebremichael, M.: Empirical Distribution of Conditional Errors in Radar Rainfall Products, Geophys. Res. Lett., 47, 1–8, https://doi.org/10.1029/2020GL090237, 2020.
Ciach, G. J., Krajewski, W. F., and Villarini, G.: Product-error-driven uncertainty model for probabilistic quantitative precipitation estimation with NEXRAD data, J. Hydrometeorol., 8, 1325–1347, https://doi.org/10.1175/2007JHM814.1, 2007.
Crane, R. K.: Automatic Cell Detection and Tracking, IEEE T. Geosci. Elect., 17, 250–262, https://doi.org/10.1109/TGE.1979.294654, 1979.
De Vos, L., Leijnse, H., Overeem, A., and Uijlenhoet, R.: The potential of urban rainfall monitoring with crowdsourced automatic weather stations in Amsterdam, Hydrol. Earth Syst. Sci., 21, 765–777, https://doi.org/10.5194/hess-21-765-2017, 2017.
Gabella, M. and Notarpietro, R.: ERAD 2002 Ground clutter characterization and elimination in mountainous terrain, Proceedings of ERAD, 305–311, https://www.copernicus.org/erad/online/erad-305.pdf (last access: 16 October 2024), 2002.
Gires, A., Onof, C., Maksimovic, C., Schertzer, D., Tchiguirinskaia, I., and Simoes, N.: Quantifying the impact of small scale unmeasured rainfall variability on urban runoff through multifractal downscaling: A case study, J.
Hydrol., 442–443, 117–128, https://doi.org/10.1016/j.jhydrol.2012.04.005, 2012.
Green, A. C.: Radar rainfall errors, Zenodo [data set], https://doi.org/10.5281/zenodo.8029394, 2023.
Green, A. C.: RadErr, GitHub [code], https://github.com/amyycb/raderr (last access: 16 October 2024), 2024.
Green, A. C., Kilsby, C., and Bárdossy, A.: A framework for space-time modelling of rainfall events for hydrological applications of weather radar, J. Hydrol., 630, 130630, https://doi.org/10.1016/j.jhydrol.2024.130630, 2024.
Grundmann, J., Hörning, S., and Bárdossy, A.: Stochastic reconstruction of spatio-temporal rainfall patterns by inverse hydrologic modelling, Hydrol. Earth Syst. Sci., 23, 225–237, https://doi.org/10.5194/hess-23-225-2019, 2019.
Hall, W., Rico-Ramirez, M. A., and Krämer, S.: Classification and correction of the bright band using an operational C-band polarimetric radar, J. Hydrol., 531, 248–258, https://doi.org/10.1016/j.jhydrol.2015.06.011, 2015.
Harrison, D., Norman, K., Darlington, T., Adams, D., Husnoo, N., and Sandford, C.: The Evolution Of The Met Office Radar Data Quality Control And Product Generation System: RADARNET, in: 37th Conference on Radar Meteorology, 14–18 September 2015, Embassy Suites Conference Center, Norman, Oklahoma, p. 14B.2, https://ams.confex.com/ams/37RADAR/webprogram/Manuscript/Paper275684/RadarnetNextGeneration_AMS_2015.pdf (last access: 16 October 2024), 2017.
Harrison, D. L., Driscoll, S. J., and Kitchen, M.: Improving precipitation estimates from weather radar using quality control and correction techniques, Meteorol. Appl., 7, 135–144, 2000.
Hasan, M. M., Sharma, A., Johnson, F., Mariethoz, G., and Seed, A.: Correcting bias in radar Z–R relationships due to uncertainty in point rain gauge networks, J. Hydrol., 519, 1668–1676, https://doi.org/10.1016/j.jhydrol.2014.09.060, 2014.
Hasan, M.
M., Sharma, A., Mariethoz, G., Johnson, F., and Seed, A.: Improving radar rainfall estimation by merging point rainfall measurements within a model combination framework, Adv. Water Resour., 97, 205–218, https://doi.org/10.1016/j.advwatres.2016.09.011, 2016.
Hooper, J. E. N. and Kippax, A. A.: The bright band – a phenomenon associated with radar echoes from falling rain, Q. J. Roy. Meteorol. Soc., 76, 125–132, https://doi.org/10.1002/qj.49707632803.
Jacobi, S. and Heistermann, M.: Benchmarking attenuation correction procedures for six years of single-polarized C-band weather radar observations in South-West Germany, Geomatics, Nat. Hazards Risk, 7, 1785–1799, https://doi.org/10.1080/19475705.2016.1155080, 2016.
Kitchen, M., Brown, R., and Davies, A. G.: Real-time correction of weather radar data for the effects of bright band, range and orographic growth in widespread precipitation, Q. J. Roy. Meteorol. Soc., 120, 1231–1254, https://doi.org/10.1002/qj.49712051906, 1994.
Krämer, S.: Quantitative Radardatenaufbereitung für die Niederschlagsvorhersage und die Siedlungsentwässerung, Leibniz Universität Hannover, Mitteilungen des Instituts für Wasserwirtschaft, Hydrologie und landwirtschaftlichen Wasserbau, Heft 92, p. 392, 2008.
Krämer, S. and Verworn, H.-R.: Improved C-band radar data processing for real time control of urban drainage systems, in: 11th International Conference on Urban Drainage, 23–26 September 2018, 1–10, https://doi.org/10.2166/wst.2009.282, 2008.
Lee, G. W., Seed, A. W., and Zawadzki, I.: Modeling the variability of drop size distributions in space and time, J. Appl. Meteorol. Clim., 46, 742–756, https://doi.org/10.1175/JAM2505.1, 2007.
Li, Y., Zhang, G., Doviak, R. J., Lei, L., and Cao, Q.: A new approach to detect ground clutter mixed with weather signals, IEEE T. Geosci.
Remote, 51, 2373–2387, https://doi.org/10.1109/TGRS.2012.2209658, 2013.
Libertino, A., Allamano, P., Claps, P., Cremonini, R., and Laio, F.: Radar estimation of intense rainfall rates through adaptive calibration of the Z–R relation, Atmosphere, 6, 1559–1577, https://doi.org/10.3390/atmos6101559, 2015.
Marshall, J. S. and Palmer, W. M. K.: The Distribution of Raindrops With Size, J. Meteorol., 5, 165–166, https://doi.org/10.1175/1520-0469(1948)005<0165:tdorws>2.0.co;2, 1948.
Meischner, P.: Weather Radar – Principles and Advanced Applications, https://books.google.com/books/about/Weather_Radar.html?id=pnNNi9gD1CIC (last access: 16 October 2024), 2005.
Met Office: Met Office Rain Radar Data from the NIMROD System, NCAS British Atmospheric Data Centre [data set], http://catalogue.ceda.ac.uk/uuid/82adec1f896af6169112d09cc1174499/ (last access: 16 October 2024), 2003.
Michelson, D., Einfalt, T., Holleman, I., Gjertsen, U., Friedrich, K., Haase, G., Lindskog, M., and Jurczyk, A.: Weather radar data quality in Europe – quality control and characterization, Publications Office of the European Union, ISBN 92-898-0018-6, 2005.
Nicol, J. C. and Austin, G. L.: Attenuation correction constraint for single-polarisation weather radar, Meteorol. Appl., 10, 345–354, https://doi.org/10.1017/S1350482703001051, 2003.
Ochoa-Rodriguez, S., Wang, L. P., Gires, A., Pina, R. D., Reinoso-Rondinel, R., Bruni, G., Ichiba, A., Gaitan, S., Cristiano, E., Van Assel, J., Kroll, S., Murlà-Tuyls, D., Tisserand, B., Schertzer, D., Tchiguirinskaia, I., Onof, C., Willems, P., and Ten Veldhuis, M. C.: Impact of spatial and temporal resolution of rainfall inputs on urban hydrodynamic modelling outputs: A multi-catchment investigation, J.
Hydrol., 531, 389–407, https://doi.org/10.1016/j.jhydrol.2015.05.035, 2015.
Ośródka, K., Szturc, J., and Jurczyk, A.: Chain of data quality algorithms for 3-D single-polarization radar reflectivity (RADVOL-QC system), Meteorol. Appl., 21, 256–270, https://doi.org/10.1002/met.1323, 2014.
Pegram, G. G. and Clothier, A. N.: Downscaling rainfields in space and time, using the String of Beads model in time series mode, Hydrol. Earth Syst. Sci., 5, 175–186, https://doi.org/10.5194/hess-5-175-2001, 2001.
Rico-Ramirez, M. A., Liguori, S., and Schellart, A. N. A.: Quantifying radar-rainfall uncertainties in urban drainage flow modelling, J. Hydrol., 528, 17–28, https://doi.org/10.1016/j.jhydrol.2015.05.057, 2015.
Savina, M., Schäppi, B., Molnar, P., Burlando, P., and Sevruk, B.: Comparison of a tipping-bucket and electronic weighing precipitation gage for snowfall, in: Rainfall in the Urban Context: Forecasting, Risk and Climate Change, vol. 103, Elsevier B.V., 45–51, https://doi.org/10.1016/j.atmosres.2011.06.010, 2012.
Schleiss, M., Olsson, J., Berg, P., Niemi, T., Kokkonen, T., Thorndahl, S., Nielsen, R., Ellerbæk Nielsen, J., Bozhinova, D., and Pulkkinen, S.: The accuracy of weather radar in heavy rain: A comparative study for Denmark, the Netherlands, Finland and Sweden, Hydrol. Earth Syst. Sci., 24, 3157–3188, https://doi.org/10.5194/hess-24-3157-2020, 2020.
Seo, B. C., Krajewski, W. F., Quintero, F., ElSaadani, M., Goska, R., Cunha, L. K., Dolan, B., Wolff, D. B., Smith, J. A., Rutledge, S. A., and Petersen, W. A.: Comprehensive evaluation of the IFloodS Radar rainfall products for hydrologic applications, J. Hydrometeorol., 19, 1793–1813, https://doi.org/10.1175/JHM-D-18-0080.1, 2018.
Shehu, B. and Haberlandt, U.: Relevance of merging radar and rainfall gauge data for rainfall nowcasting in urban hydrology, J. Hydrol., 594, 125931, https://doi.org/10.1016/j.jhydrol.2020.125931.
Smith, J. A. and Krajewski, W.
F.: A modeling study of rainfall rate-reflectivity relationships, Water Resour. Res., 29, 2505–2514, https://doi.org/10.1029/93WR00962, 1993.
Thorndahl, S., Einfalt, T., Willems, P., Ellerbæk Nielsen, J., Ten Veldhuis, M. C., Arnbjerg-Nielsen, K., Rasmussen, M. R., and Molnar, P.: Weather radar rainfall data in urban hydrology, Hydrol. Earth Syst. Sci., 21, 1359–1380, https://doi.org/10.5194/hess-21-1359-2017, 2017.
Uijlenhoet, R. and Berne, A.: Stochastic simulation experiment to assess radar rainfall retrieval uncertainties associated with attenuation and its correction, Hydrol. Earth Syst. Sci., 12, 587–601, https://doi.org/10.5194/hess-12-587-2008, 2008.
Ventura, J. F. I. and Tabary, P.: The new French operational polarimetric radar rainfall rate product, J. Appl. Meteorol. Clim., 52, 1817–1835, https://doi.org/10.1175/jamc-d-12-0179.1, 2013.
Villarini, G. and Krajewski, W. F.: Review of the different sources of uncertainty in single polarization radar-based estimates of rainfall, Surv. Geophys., 31, 107–129, https://doi.org/10.1007/s10712-009-9079-x, 2010.
Villarini, G., Seo, B.-C., Serinaldi, F., and Krajewski, W. F.: Spatial and temporal modeling of radar rainfall uncertainties, Atmos. Res., 135, 91–101, 2014.
Yan, J., Li, F., Bárdossy, A., and Tao, T.: Conditional simulation of spatial rainfall fields using random mixing: A study that implements full control over the stochastic process, Hydrol. Earth Syst. Sci., 25, 3819–3835, https://doi.org/10.5194/hess-25-3819-2021, 2021.
{"url":"https://www.softmath.com/algebra-software-2/find-answers-for-algebra.html","timestamp":"2024-11-12T06:45:10Z","content_type":"text/html","content_length":"40572","record_id":"<urn:uuid:7db605e8-883b-4d91-9db9-4eda4b61e03a>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00342.warc.gz"}
Control Dependence for Extended Finite State Machines ^1Department of Computer Science, King’s College London, Strand, London, WC2R 2LS, United Kingdom. {kalliopi.androutsopoulos, david.j.clark, mark.harman, zheng.li}@kcl.ac.uk ^2Bournemouth University, Poole, Dorset, BH12 5BB, United Kingdom. laurie@tratt.net Abstract. Though there has been nearly three decades of work on program slicing, there has been comparatively little work on slicing for state machines. One of the primary challenges that currently presents a barrier to wider application of state machine slicing is the problem of determining control dependence. We survey existing related definitions, introducing a new definition that subsumes one and extends another. We illustrate that by using this new definition our slices respect Weiser slicing’s termination behaviour. We prove results that clarify the relationships between our definition and older ones, following this up with examples to motivate the need for these differences. 1 Introduction Program slicing is a source code analysis technique that identifies the parts of a program’s source code which can affect the computation of a chosen variable at a chosen point in it. The variable and point of interest constitute the slicing criterion. There are many variations on the slicing theme. For instance, slices can be constructed statically (with respect to all possible inputs), dynamically (with respect to a single input) or within some spectrum in-between. Program slicing has proved to be widely applicable, with application areas ranging from program comprehension [HBD03] to reverse engineering and reuse [CCD98]. However, despite thirty years of research, several hundred papers and many surveys on program slicing [BH04, De 01, Tip95], there has been comparatively little work on slicing at the model level. This paper tackles slicing at the model level, particularly static slicing of Finite State Machines (FSMs). 
FSMs are a graphical formalism that has become widely used in specifications of embedded and reactive systems. Their main drawback is that even moderately complicated systems result in large and unwieldy diagrams. Harel’s Statecharts [Har87] and Extended Finite State Machines (EFSMs) are two of the many attempts over the past decades to address FSMs’ disadvantages. Work on slicing FSM models began with the work of Heimdahl et al. in the late 1990s [HW97, HTW98], followed by Wang et al. [WDQ02] and then in 2003 by the work of Korel et al. [KSTV03] and more recently by Langenhove and Hoogewijs [LH07] and by Labbé et al. [LGP07, LG08]. One of the challenges facing any attempt to slice an EFSM is the problem of how to correctly account for control dependence. It is common for state machines modelling such things as reactive systems not to have a final computation point or ‘exit node’. To overcome this problem, Ranganath et al. [RAB^+05] recently introduced the concept of a control sink and associated control dependence definitions for reactive programs. A control sink is a strongly connected component from which control flow cannot escape once it enters. Building on this, Labbé et al. [LGP07, LG08] introduced a notion of control dependence and an associated slicing algorithm for EFSMs that is non-termination sensitive. However, they introduce a syntax-dependent condition, so their definition cannot be applied to arbitrary FSMs. In contrast, traditional control dependence, as used in program slicing [HRB90], is non-termination insensitive, with the consequence that the semantics of a program slice dominates the semantics of the program from which it is constructed; slicing may remove non-termination, but it will never introduce it. In moving slicing from the program level to the state-based model level, an important choice needs to be made: “should EFSM slicing be non-termination sensitive or insensitive?” Recent work on control dependence has only considered the non-termination sensitive option [LGP07, LG08].
The non-termination insensitive option was explored by Korel et al. [KSTV03], but only for the restricted class of state machines that guarantee to have an exit state. Heimdahl et al. [HW97, HTW98] have a different notion of control dependence which is not a structural property of the graph of FSMs but is based on the dependency relation between events and generated events. This could lead to slices being either non-termination sensitive or insensitive depending on the specification. The definition of control dependence given in [WDQ02, LH07] is for UML statecharts with nested and concurrent states and is the same as that of data dependence when applied to EFSMs that do not have concurrent and/or nested states. This leaves open the question of how to extend control dependence to create non-termination insensitive slicing for general EFSMs in which there may be no exit node. This problem is not merely of intellectual curiosity as it also has implications for the applications of slicing. In the literature on traditional program slicing, a non-termination sensitive formulation was proposed as early as 1993 by Kamkar [Kam93], but has not been taken up in subsequent slicing research. Non-termination sensitive slicing tends to produce very large slices, because all iterative constructs that cannot be statically determined to terminate must be retained in the slice, no matter whether they have any effect other than termination on the values computed at the slicing criterion. These ‘loop shells’ must be retained in order to respect the definition of non-termination sensitivity. Furthermore, for most of the applications of slicing listed above, it turns out that it is perfectly acceptable for slicing to be non-termination insensitive. In this paper, we introduce a non-termination insensitive form of control dependence for EFSM dependence analysis, that can be applied to any FSM, and a slicing algorithm based upon it. 
Like Labbé et al., we build on the recent work of Ranganath et al. [RAB^+05], but our definition is non-termination insensitive. Also, unlike Korel’s definition, our development of the recent work of Ranganath et al. allows us to handle arbitrary EFSMs. We prove that our definition of control dependence is backward compatible with traditional non-termination insensitive control dependence outside of control sinks. Furthermore, we prove that our definition agrees with the non-termination sensitive control dependence of Labbé et al. inside control sinks. Finally, we demonstrate the type of slices produced with our definition. 2 Extended Finite State Machines We formally define an EFSM as follows. Definition 1 (Extended Finite State Machine). An Extended Finite State Machine (EFSM) is a tuple E = (S, T, Ev, V) where S is a set of states, T is a set of transitions, Ev is a set of events, and V is a store represented by a set of variables. Transitions have a source state source(t) ∈ S, a target state target(t) ∈ S and a label lbl(t). Transition labels are of the form e[1][g]/a, where e[1] ∈ Ev, g is a guard, i.e. a condition (we assume a standard conditional language) that must hold for the transition to be taken when an e[1] event occurs, and a is a sequence of actions (we assume a standard expression language including assignments). All parts of a label are optional. EFSMs are possibly non-deterministic. States of S are atomic. Actions can involve store updates or generation of events or both. A transition t may have a successor t′ whose source is the same as the target of t. Two or more distinct transitions which share the same source node are said to be siblings. A final transition is a transition whose target is an exit state, and an exit state is a state which has no outgoing transitions. An ε-transition is one with no event or guard. 3 Survey In this section we survey several existing definitions of control dependence and discuss their strengths and weaknesses.
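Before surveying the alternatives, it may help to fix a concrete representation for Definition 1. The sketch below is ours, not the paper’s; the field names, the use of strings for states, and the helper methods are all assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    source: str          # source(t) in S
    target: str          # target(t) in S
    event: str = ""      # e1 in the label e1[g]/a (optional)
    guard: str = ""      # the guard g (optional)
    actions: tuple = ()  # the action sequence a (optional)

@dataclass
class EFSM:
    states: set        # S
    transitions: list  # T
    events: set        # Ev
    variables: dict    # the store V

    def successors(self, t):
        """Transitions t' whose source state is the target of t."""
        return [u for u in self.transitions if u.source == t.target]

    def siblings(self, t):
        """Distinct transitions sharing t's source state."""
        return [u for u in self.transitions
                if u is not t and u.source == t.source]

    def exit_states(self):
        """States with no outgoing transitions."""
        sources = {u.source for u in self.transitions}
        return {s for s in self.states if s not in sources}
```

The sibling and exit-state helpers correspond directly to the notions of sibling transitions and exit states used throughout the survey below.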
Ranganath et al.’s control dependence definitions [RAB^+05, RAB^+07] are defined for programs or systems with multiple exit points and/or which execute indefinitely, and therefore form the basis for subsequent state machine control dependence definitions. We exclude from this discussion the control dependence definition as given in [WDQ02, LH07] because it is defined in terms of concurrent states and transitions, and EFSMs do not have concurrent states and transitions. Moreover, when applied to states and transitions that are not concurrent, it is the same as data dependence as in Definition 13. The definitions of control dependence of Korel et al. [KSTV03], Ranganath et al. and Labbé et al. [LGP07, LG08] are given in terms of execution paths. Since a path is commonly presented as a (possibly infinite) sequence of nodes, a node is in a path if it is in the sequence. A transition is in a path if its source state is in the path and its target state is both in the path and immediately follows its source state. A maximal path is any path that terminates in an end node or final transition, or is infinite. 3.1 Control flow for RSML Heimdahl et al. [HW97, HTW98] present an approach for slicing specifications modelled in the Requirements State Machine Language (RSML) [LHHR94], a tabular notation that is based on hierarchical finite state machines. Transitions have events, guards and actions; events can generate events as actions, which are broadcast in the next step of execution. Heimdahl et al. were the first to present a control dependence-like definition for FSMs; it differs from the traditional notion as it defines control flow in terms of events rather than transitions. Definition 2 (Control flow for RSML (CF) [HTW98]). Let E be the set of all events and T the set of all transitions. The relation trigger(T → E) represents the trigger event of a transition. The relation action(T → 2^E) represents the set of events that make up the action caused by executing a transition.
follows(T → T) is defined as: (t[1],t[2]) ∈ follows iff trigger(t[1]) ∈ action(t[2]). CF can be applied to non-terminating systems that have multiple exit nodes. However, it depends on transitions being triggered by events and being able to generate events as actions, and therefore cannot be applied to every finite state machine; for example, EFSMs do not generate events. 3.2 Control dependence for EFSMs Korel et al. [KSTV03] present a definition of control dependence for EFSMs in terms of post dominance that requires execution paths to lead to an exit state. Definition 3 (Post Dominance [KSTV03]). Let Y and Z be two states and T be an outgoing transition from Y. • State Z post-dominates state Y iff Z is in every path from Y to an exit state. • State Z post-dominates transition T iff Z is on every path from Y to the exit state through T. This can be rephrased as Z post-dominates target(T). Definition 4 (Insensitive Control Dependence (ICD) [KSTV03]). Transition T[k] is control dependent on transition T[i] if: 1. source(T[k]) post-dominates transition T[i] (or target(T[i])), and 2. source(T[k]) does not post-dominate source(T[i]). This definition is successful in capturing the traditional notion of control dependence for static backward slicing. However, it can only determine control dependence for state machines with exactly one end state, failing if there are multiple exit states or if the state machine is possibly non-terminating. 3.3 Control dependence for non-terminating programs Ranganath et al. [RAB^+05, RAB^+07] address the issue of determining control dependence for programs utilising Control Flow Graphs (CFGs). A CFG is a labelled, directed graph with a set of nodes that represent statements in a program and edges that represent the control flow. A node is either a statement node (which has a single successor) or a predicate node (which has two successors, labelled with T or F for the true and false cases respectively).
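Korel et al.’s post-dominance check (Definitions 3 and 4 above) reduces to a reachability question: Z lies on every path from Y to the exit iff deleting Z makes the exit unreachable from Y. A minimal sketch of that reduction, assuming a single-exit machine in which the exit is reachable from every state, a state graph given as an adjacency map, and transition objects with source/target fields (the representation and all names are ours, not the paper’s):

```python
def post_dominates(graph, exit_state, z, y):
    """Z post-dominates Y (Definition 3) iff Z lies on every path from Y
    to the exit state; equivalently, deleting Z makes the exit
    unreachable from Y. Assumes the exit is reachable from every state."""
    if z == y:
        return True            # Y trivially lies on every path from Y
    if y == exit_state:
        return False           # the empty path already reaches the exit
    adj = {v: [w for w in ws if w != z]   # the graph with Z deleted
           for v, ws in graph.items() if v != z}
    seen, work = {y}, [y]
    while work:
        v = work.pop()
        for w in adj.get(v, ()):
            if w not in seen:
                seen.add(w)
                work.append(w)
    return exit_state not in seen

def icd(transitions, graph, exit_state):
    """All ICD pairs (T_i, T_k) of Definition 4: T_k is control dependent
    on T_i iff source(T_k) post-dominates target(T_i) but not source(T_i)."""
    return {(ti, tk)
            for ti in transitions
            for tk in transitions
            if post_dominates(graph, exit_state, tk.source, ti.target)
            and not post_dominates(graph, exit_state, tk.source, ti.source)}
```

Here post-dominance is taken to be reflexive, matching the “(or target(T[i]))” reading of clause 1.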
A CFG has a start node n[s] (which must have no incoming edges) such that all nodes are reachable from n[s]; it may have a set of end nodes that have no successors. Two versions of control dependence definitions are described: non-termination sensitive and non-termination insensitive control dependence. The difference between these definitions lies in the choice of paths. Non-termination sensitive control dependence is given in terms of maximal paths. Definition 5 (Non-termination Sensitive Control Dependence (NTSCD)). In a CFG, N[i] →NTSCD N[j] means that a node N[j] is non-termination sensitive control dependent on a node N[i] iff N[i] has at least two successors N[k] and N[l] such that: 1. N[j] ∈ π for all maximal paths π from N[k]; and 2. there exists a maximal path π[0] from N[l] with N[j] ∉ π[0]. Non-termination insensitive control dependence is given in terms of sink-bounded paths that end in control sinks. A control sink is a region of the graph which, once entered, is never left. These regions are always SCCs, even if only the trivial SCC, i.e. a single node with no successors. Definition 6 (Control Sink). A control sink, K, is a set of nodes that form a strongly connected component such that, for each node n in K, each successor of n is in K. Definition 7 (Sink-bounded Paths). A maximal path π is sink-bounded iff there exists a control sink K such that: 1. π contains a node from K; 2. if π is infinite then all nodes in K occur infinitely often. The second clause of Definition 7 defines a form of fairness and hence we refer to it as the fairness condition. SinkPaths(N) denotes the set of sink-bounded paths from a node N. We now define Ranganath et al.’s [RAB^+05] non-termination insensitive version of control dependence. Definition 8 (Non-termination Insensitive Control Dependence (NTICD)). In a CFG, N[i] →NTICD N[j] means that a node N[j] is non-termination insensitive control dependent on a node N[i] iff N[i] has at least two successors N[k] and N[l] such that: 1.
N[j] ∈ π for all paths π ∈ SinkPaths(N[k]); 2. there exists a path π[0] ∈ SinkPaths(N[l]) with N[j] ∉ π[0]. The difference between paths in NTSCD and NTICD is shown in Figure 1. According to Definition 5, n1 does not NTSCD-control n4, as n4 is not on all maximal paths: there is a maximal path with an infinite loop, i.e. {n2 → n3 → n2 ...}. However, n1 →NTICD n2, n3, n4 since n2, n3 and n4 occur on all sink-bounded paths from n2 (the control sink for these paths is n4) and there exists a sink-bounded path from n5 (the control sink consists of n5, n6, n7) which does not include n2, n3 and n4. Compared to NTSCD, NTICD cannot calculate any control dependencies within control sinks. For example, in Figure 1, NTICD yields no dependences within the control sink consisting of n5, n6 and n7. 3.4 Control dependence for communicating automata Labbé et al. [LG08] adapt Ranganath et al.’s NTSCD definition for communicating automata, in particular focusing on Input/Output Symbolic Transition Systems (IOSTS) [GGRT06]. Definition 9 (Labbé et al. – Non-Termination Sensitive Control Dependence (LG-NTSCD) [LG08]). For an IOSTS S, a transition T[j] is control dependent on a transition T[i] if T[i] has a sibling transition T[k] such that: 1. T[i] has a non-trivial guard, i.e. a guard whose value is not constant under all variable valuations; 2. for all maximal paths π from T[i], the source of T[j] belongs to π; 3. there exists a maximal path π[0] from T[k] such that the source of T[j] does not belong to π[0]. FSM models differ from CFGs in several ways. For example, FSMs can have multiple start and exit nodes, more than two edges between two states and more than two successors from a state. Moreover, in CFGs, decisions (Boolean conditions) are made at the predicate nodes while in state machines they are made on transitions. Labbé et al. take such differences into account when adapting NTSCD. For example, in Figure 2, a direct application of NTSCD yields dependences such as T2 → T3 when the maximal paths start from the start state. However, these control dependencies are non-sensical because T2 and T3 are sibling transitions.
Using LG-NTSCD these control dependencies do not exist because in the third clause of Definition 9 the maximal paths start from s1. The first clause of LG-NTSCD concerning the non-triviality of guards is introduced in order to avoid a transition being control dependent on transitions that are executed non-deterministically even though they are NTSCD control dependent. Furthermore, because this is a syntax-dependent clause, the definition cannot be applied to many FSMs, such as the FSM for the elevator system in Figure 3 that contains transitions with trivial guards. 4 New Control Dependence Definition: UNTICD We define a new control dependence definition by extending Ranganath et al.’s NTICD definition and subsuming Korel et al.’s definition in order to capture a notion of control dependence for EFSMs that has the following properties. First, the definition is general in that it should be applicable to any reasonable FSM language variant. Second, it is applicable to non-terminating FSMs and/or those that have multiple exit states. Third, by choosing FSM slicing to be non-termination insensitive (in order to coincide with traditional program slicing) it produces smaller slices than traditional non-termination sensitive slicing. Following [RAB^+05], the paths that we consider are sink-bounded paths, i.e. those that terminate in a control sink as in Definition 6. Unlike NTICD, the sink-bounded paths are unfair, i.e. we drop the fairness condition in Definition 7. For non-terminating systems this means that control dependence can be calculated within control sinks. Definition 10 (Unfair Sink-bounded Paths). A maximal path π is sink-bounded iff there exists a control sink K such that π contains a transition from K. Note that a transition is in a path if its source state is in the path and its target state is both in the path and immediately follows its source state. Definition 11 (Unfair Non-termination Insensitive Control Dependence (UNTICD)).
T[i] →UNTICD T[j] means that a transition T[j] is control dependent on a transition T[i] iff T[i] has at least one sibling T[k] such that: 1. for all paths π ∈ UnfairSinkPaths(target(T[i])), the source(T[j]) belongs to π; 2. there exists a path π ∈ UnfairSinkPaths(source(T[k])) such that the source(T[j]) does not belong to π. UNTICD is in essence a version of NTICD adapted to EFSMs (rather than CFGs) and given in terms of unfair sink-bounded paths. This means that, unlike in the second clause of Definition 8, sink-bounded paths start from the source of T[k] rather than from the target of T[k], because EFSMs can have many transitions between states and Definition 8 would lead to non-sensical dependences: e.g. in Figure 2, T2 would otherwise control its sibling T3, whereas under our definition T2 does not control T3. 5 Properties of the Control Dependence Relation We prove the following properties for UNTICD: UNTICD subsumes ICD; the transitive closure for the NTICD relation is contained in the transitive closure for the UNTICD relation; and for an EFSM M, UNTICD and NTSCD dependences for all transitions within control sinks are identical. 5.1 UNTICD subsumes ICD Proposition 1. Definition 4 (ICD) is a special case of Definition 11 (UNTICD). Proof. Definition 4 is given in terms of post dominance, which considers every path to a unique exit state. Definition 11 is given in terms of sink-bounded paths that terminate in control sinks. The unique exit state is a trivial strongly-connected component that has no successors, and hence is a control sink. Therefore, the paths in ICD are contained in the paths of UNTICD, but UNTICD is not restricted to these. Moreover, the clauses of Definition 4 are the same as the clauses of Definition 11. ⊓ ⊔ 5.2 Relation between NTICD and UNTICD’s transitive closures In Theorem 2 we show that the transitive closure of the NTICD relation is contained in the transitive closure of the UNTICD relation: dropping the fairness condition removes no dependences (Lemma 2) but introduces dependences within control sinks.
In order to prove this theorem, we first need to identify the regions in the state machine where dependencies can occur, and we do that by considering all the cases in which a transition t1 controls another transition t2, where K, K1, K2 are control sinks: In case (1) both t1 and t2 are not in any control sink K. In case (2) both t1 and t2 are in the same control sink K. In case (3) t1 is in a control sink K1 and t2 is in another control sink K2. In case (4) t1 is not in any control sink and t2 belongs to a control sink. In case (5) t2 does not belong to any control sink and t1 belongs to a control sink. We introduce Definition 12, which defines a descendant of a transition, and Lemma 1 so that we can discard any impossible cases. Definition 12 (Descendant). A descendant of t is a transition related to t by the closure of the successor relation. Lemma 1. If a transition t1 belongs to a control sink K, then every descendant of t1 also belongs to K. Proof. By Definition 6 of the control sink and Definition 12 of the descendant relation. ⊓ ⊔ By Lemma 1, cases (3) and (5) are not possible, since t1 can only control t2 if t2 is a descendant of t1. Therefore, we only consider cases (1), (2), and (4). When t1 NTICD-controls t2, then for each case we write case1^F, case2^F, and case4^F. Similarly, when t1 UNTICD-controls t2, we write case1^U, case2^U, and case4^U. Lemma 2 shows that the control dependences produced by applying UNTICD to transitions outside of the control sink are the same as those produced when applying NTICD, i.e. case1^U = case1^F. Lemma 2. For an EFSM M, NTICD and UNTICD dependences for transitions T outside of the control sink K (where t ∈ T and t ∉ K) are the same. Proof. Let us assume that in an EFSM M, T[j] is NTICD control dependent on T[i] (T[i] →NTICD T[j]) and that T[i] and T[j] are outside of the control sink. From Definition 8, T[i] has a sibling transition T[k] such that there exists a path π[k] ∈ SinkPaths(T[k]) where the source(T[j]) does not belong to π[k]. Now suppose that the fairness condition in the definition of sink-bounded paths is removed, i.e.
Definition 10 holds. This affects the transitions within the control sink only in that they are no longer required to occur infinitely often. The source of T[j] still remains on all paths from T[i], as these are outside of the control sink, and π[k] still exists. Therefore, NTICD and UNTICD dependences of transitions outside of the control sink are the same. ⊓ ⊔ The pairwise intersections of cases (1), (2), (4) are empty. Therefore each relation can be partitioned into these three disjoint cases. In Theorem 2 we show that the transitive closure of NTICD dependences between transitions within a control sink and between a transition outside of the control sink and a transition within a control sink is a subset of the transitive closure of UNTICD dependences between transitions within a control sink and between a transition outside of the control sink and a transition within a control sink, i.e. (case2^F ∪ case4^F)^* ⊆ (case2^U ∪ case4^U)^*. First we prove the following lemma. Lemma 3. Let A ∩ B = C ∩ D = ∅, X = A ∪ B and Y = C ∪ D, all relations over the same base set. If A = C and B^* ⊆ D^*, then X^* ⊆ Y^*. Proof. (x[1],x[n]) ∈ X^* iff there exists a path π = (x[1],x[2]),(x[2],x[3]),...,(x[n-1],x[n]) such that for any two successive members (x[i],x[j]) and (x[k],x[l]) of π, x[j] = x[k], and each (x[i],x[j]) ∈ X. This constructs the smallest transitive closure of X. We show X^* ⊆ Y^* by induction on the length of the path π in X^*. Base Case: length(π) = 1; then either (x[0],x[1]) ∈ A = C ⊆ (C ∪ D)^* = Y^*, or (x[0],x[1]) ∈ Y^* because (x[0],x[1]) ∈ B ⊆ B^* ⊆ D^* ⊆ (C ∪ D)^* = Y^*. Induction Case: (Inductive Hypothesis (IH)) Suppose x X^* y holds via a path π of length N in X; then x Y^* y. Now let x X^* z hold via a path of length N + 1 in X. Then there exists y such that x X^* y and y X z; by IH, x Y^* y, and by the base-case argument applied to y X z, y Y^* z; hence x Y^* z. ⊓ ⊔ Theorem 2. For an EFSM M, the transitive closure of the NTICD relation is contained in the transitive closure of the UNTICD relation. Proof.
The containment NTICD^* ⊆ UNTICD^* can also be expressed as the transitive closure over all of the cases: (case1^F ∪ case2^F ∪ case4^F)^* ⊆ (case1^U ∪ case2^U ∪ case4^U)^*, which is true if: • case1^F = case1^U, i.e. NTICD and UNTICD dependences between transitions that are not in a control sink are the same, by Lemma 2; and • (case2^F ∪ case4^F)^* ⊆ (case2^U ∪ case4^U)^*, i.e. the transitive closure of NTICD dependences between transitions within a control sink, and between a transition outside of the control sink and a transition within a control sink, is a subset of the transitive closure of the corresponding UNTICD dependences. This is true by Lemma 3. ⊓ ⊔ 5.3 NTSCD and UNTICD dependencies within control sinks Finally, we show that UNTICD and NTSCD are compatible in control sinks. Theorem 3. For an EFSM M, the UNTICD and NTSCD dependences for all transitions within control sinks are identical. Proof. In a control sink K, if T[i] ∈ K and T[j] ∈ K, then according to Definition 10 sink-bounded paths reduce to maximal paths, since transitions in K are no longer required to occur infinitely often (fairly). This coincides with Definition 5. Therefore, the control dependences produced by UNTICD and NTSCD for transitions within control sinks are equivalent. ⊓ ⊔ 6 Comparison of UNTICD with existing definitions Figure 3 illustrates an EFSM of the door control component, a subcomponent of the elevator control system [SW99]. The door component controls the elevator door, i.e. it opens the door, waits for the passengers to enter or leave the elevator and finally shuts the door. In this section we compute all the control dependencies for this EFSM using the existing and new definitions for the purpose of comparison, as given in Figure 4.
The dependences of Figure 4 are as follows:
CF: no dependences, as the EFSM does not have generated events.
ICD: not applicable, as the EFSM does not have a unique exit state.
NTSCD: wait → closing; closing → closed; closed → opening; opening → opened; opened → closing; closing → opening.
NTICD: no dependences.
LG-NTSCD: T3 → T4, T5, T6.
UNTICD: T5 → T9, T10; T6 → T7, T8; T8 → T9, T10; T10 → T11, T12; T12 → T4, T5, T6.
CF cannot be applied to the EFSM in Figure 3 because it is given in terms of the relationship between events and generated events and, according to the syntax of EFSMs, events cannot be generated. ICD cannot be applied to the EFSM in Figure 3 because it does not have a unique exit state. For EFSMs that lead to a unique exit state the control dependences computed for both ICD and UNTICD are the same. For example, in Figure 2, ICD and UNTICD compute the same dependences, i.e. T1 → T2, T3, T4, T5. In Figure 4, NTSCD and NTICD are given in terms of nodes but can easily be represented in terms of transitions. Compared to UNTICD, NTSCD considers maximal paths rather than sink-bounded paths and consequently introduces more dependences when there are loops on paths that lead to a control sink. For example, in Figure 3, wait →NTSCD closing because of the loop introduced by the self-transition T2. Note that NTSCD and UNTICD have the same dependences inside control sinks; we have formally shown this to be true in Theorem 3. In Figure 3 there are no NTICD dependences because any control dependency caused by loops on paths to a control sink is ignored, and there are no control dependencies within control sinks because of the fairness condition of sink-bounded paths. Unlike NTICD, UNTICD calculates dependences within control sinks. Also, as formally shown by Theorem 2, the transitive closure of NTICD is contained within the transitive closure of UNTICD, although trivially true in this case. LG-NTSCD is NTSCD adapted for transitions and with a syntax-dependent clause, i.e. that the controlling transition’s guard must be non-trivial.
This additional clause reduces the number of dependences compared to those of NTSCD. For example, in Figure 3, T5, T6, T8, T10 and T12 do not control any other transition because they have trivial guards. The transitive closure of LG-NTSCD, as used for slicing, could therefore produce too few dependences to be useful. 7 EFSM Slicing with UNTICD Backward static program slicing was first introduced by Weiser [Wei81] and describes a source code analysis technique that, through dependence relations, identifies all the statements in the program that influence the computation of a chosen variable at a chosen point in the program, i.e. the slicing criterion. It is non-termination insensitive. Similarly, EFSM slicing identifies those transitions which affect the slicing criterion, by computing control dependence and data dependence. Data dependence holds when there is a definition-clear path between a variable’s definition and its use. We adopt the data dependence definition of [KSTV03] for an EFSM. Definition 13 (Data Dependence (DD)). T[i] →v T[k] means that transitions T[i] and T[k] are data dependent with respect to variable v if: 1. v ∈ D(T[i]), where D(T[i]) is the set of variables defined by transition T[i], i.e. variables defined by actions and variables defined by the event of T[i] that are not redefined in any action of T[i]; 2. v ∈ U(T[k]), where U(T[k]) is the set of variables used in the condition and actions of transition T[k]; 3. there exists a path in an EFSM from the source(T[i]) to the target(T[k]) whereby v is not modified. The data dependences for the door controller EFSM in Figure 3 are: {T1 → T2, T3}, {T2 → T2, T3}, {T5 → T11}, {T8 → T11}, and {T11 → T11}. Definition 14 (Slicing Criterion). A slicing criterion for an EFSM is a pair (t,V) where transition t ∈ T and variable set V ⊆ Var. It designates the point in the evaluation immediately after the execution of the action contained in transition t. Definition 15 (Slice). A slice of an EFSM M is an EFSM M′ that contains ε-transitions.
The transitions that are not ε-transitions are in the set of transitions that are directly or indirectly (via the transitive closure) DD- and UNTICD-dependent on the slicing criterion c. 7.1 Computing EFSM slices The objective of the slicing algorithm is to automatically compute the slice of an EFSM model M with respect to the given slicing criterion c. First, the algorithm computes the data dependences, using Definition 13, and the control dependences, using Definition 11, for all transitions in M. These are then represented in a dependence graph, which is a directed graph where nodes represent transitions and edges represent data and control dependences between transitions. Then, given the slicing criterion c, the algorithm marks all backwardly reachable transitions from c, i.e. the transitive closure of DD and UNTICD with respect to c. All unmarked transitions are anonymised, i.e. become ε-transitions. Note that we can replace UNTICD with NTICD, LG-NTSCD and NTSCD in order to compare the different slices produced. If the slicing criterion for the EFSM in Figure 3 is T11, then Figure 5(a) illustrates the slice produced when using UNTICD, and Figure 5(b) illustrates the slice produced when using LG-NTSCD. Unlike LG-NTSCD and NTSCD, UNTICD slicing slices away transitions which are affected by loops (before control sinks) but are not data dependent on T11, e.g. T3. Moreover, there are no LG-NTSCD dependences within the control sink because the transitions have trivial guards. The trivial guards in Figure 3 do not affect whether T9 or T10 is taken non-deterministically, so in the case where the event opening occurs infinitely often, T11 is never reached. If the slicing criterion for the EFSM in Figure 3 is T12, then the marked transitions in the UNTICD slice are {T5, T10, T12}, while the marked transitions in the LG-NTSCD slice are {T3, T12}, in the NTSCD slice {T3, T5, T10, T12}, and in the NTICD slice {T12}.
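The dependence computation and the marking step above are both effectively computable on a finite state graph: the path quantifiers of Definition 11 reduce to reachability and cycle checks once the avoided state is deleted, and marking is backward reachability over the dependence graph. Everything below is our own sketch, not the paper’s code: the state graph is an adjacency map, transitions expose source/target fields, control sinks are assumed precomputed (e.g. as the bottom SCCs of the state graph, per Definition 6), and the example dependence pairs in the usage are hypothetical:

```python
def _reachable(adj, start):
    """States reachable from `start` (inclusive)."""
    seen, work = {start}, [start]
    while work:
        v = work.pop()
        for w in adj.get(v, ()):
            if w not in seen:
                seen.add(w)
                work.append(w)
    return seen

def _sink_path_avoids(graph, sinks, start, avoid):
    """Is there an unfair sink-bounded path (Definition 10) from state
    `start` that never visits state `avoid`?"""
    if start == avoid:
        return False
    adj = {v: [w for w in ws if w != avoid]      # graph with `avoid` deleted
           for v, ws in graph.items() if v != avoid}
    reach = _reachable(adj, start)
    for sink in sinks:
        entries = (sink - {avoid}) & reach
        if not entries:
            continue
        only = next(iter(sink))
        if len(sink) == 1 and not graph.get(only, ()):
            return True     # trivial sink = exit state: the path ends there
        # Non-trivial sink: the path must continue forever inside
        # sink - {avoid}, i.e. reach a cycle that avoids `avoid`.
        nodes = sink - {avoid}
        sub = {v: [w for w in adj.get(v, ()) if w in nodes] for v in nodes}
        live = set()
        for e in entries:
            live |= _reachable(sub, e)
        changed = True
        while changed:   # peel nodes with no live successor; cycles survive
            changed = False
            for v in list(live):
                if not any(w in live for w in sub.get(v, ())):
                    live.discard(v)
                    changed = True
        if live:
            return True
    return False

def unticd(transitions, graph, sinks):
    """All UNTICD pairs (T_i, T_j) per Definition 11."""
    deps = set()
    for ti in transitions:
        if not any(tk is not ti and tk.source == ti.source
                   for tk in transitions):
            continue   # Definition 11 requires at least one sibling
        for tj in transitions:
            # clause 1: source(T_j) on ALL unfair sink paths from target(T_i)
            if _sink_path_avoids(graph, sinks, ti.target, tj.source):
                continue
            # clause 2: some path from the shared source state (the source
            # of every sibling T_k) avoids source(T_j)
            if _sink_path_avoids(graph, sinks, ti.source, tj.source):
                deps.add((ti, tj))
    return deps

def mark_slice(dependences, criterion):
    """Marking step: backward reachability over dependence pairs
    (controlling_or_defining, dependent); every unmarked transition
    then becomes an ε-transition."""
    preds = {}
    for src, dst in dependences:
        preds.setdefault(dst, set()).add(src)
    marked, work = {criterion}, [criterion]
    while work:
        t = work.pop()
        for s in preds.get(t, ()):
            if s not in marked:
                marked.add(s)
                work.append(s)
    return marked
```

In use, mark_slice would be fed the union of the UNTICD pairs and the DD pairs; swapping in NTICD, LG-NTSCD or NTSCD pairs reproduces the comparison of slices described above.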
8 Conclusions In this paper, we introduced a non-termination insensitive form of control dependence for EFSM slicing that built on the recent work of Ranganath et al. [RAB^+05] and subsumed Korel et al.’s definition [KSTV03]. We demonstrated that by removing the fairness condition of Ranganath et al.’s NTICD no control dependences were removed, but extra control dependences within control sinks were introduced. Unlike NTICD, our new definition works with non-terminating systems and, in general, produces smaller slices than those based on NTSCD. References [BH04] David Binkley and Mark Harman. A survey of empirical results on program slicing. Advances in Computers, 62:105–178, 2004. [CCD98] Gerardo Canfora, Aniello Cimitile, and Andrea De Lucia. Conditioned program slicing. Information and Software Technology, 40(11):595–607, 1998. [De 01] Andrea De Lucia. Program slicing: Methods and applications. In International Workshop on Source Code Analysis and Manipulation, pages 142–149, Los Alamitos, California, USA, 2001. IEEE Computer Society Press. [GGRT06] Christophe Gaston, Pascale Le Gall, Nicolas Rapin, and Assia Touil. Symbolic execution techniques for test purpose definition. In Proceedings of Testing of Communicating Systems, pages 1–18. Springer, 2006. [Har87] David Harel. Statecharts: A visual formalism for complex systems. Science of Computer Programming, 8(3):231–274, June 1987. [HBD03] Mark Harman, David Binkley, and Sebastian Danicic. Amorphous program slicing. Journal of Systems and Software, 68(1):45–64, October 2003. [HRB90] Susan Horwitz, Thomas Reps, and David Binkley. Interprocedural slicing using dependence graphs. ACM Transactions on Programming Languages and Systems, 12(1):26–61, 1990. [HTW98] Mats P. E. Heimdahl, Jeffrey M. Thompson, and Michael W. Whalen. On the effectiveness of slicing hierarchical state machines: A case study. In EUROMICRO ’98: Proceedings of the 24th Conference on EUROMICRO, page 10435, Washington, DC, USA, 1998. IEEE Computer Society.
[HW97] Mats P. E. Heimdahl and Michael W. Whalen. Reduction and slicing of hierarchical state machines. In Proc. Fifth ACM SIGSOFT Symposium on the Foundations of Software Engineering. Springer–Verlag, 1997. [Kam93] Mariam Kamkar. Interprocedural dynamic slicing with applications to debugging and testing. PhD Thesis, Department of Computer Science and Information Science, Linköping University, Sweden, 1993. [KSTV03] Bogdan Korel, Inderdeep Singh, Luay Tahat, and Boris Vaysburg. Slicing of state-based models. In Proceedings of the International Conference on Software Maintenance, pages 34–43, 2003. [LG08] Sébastien Labbé and Jean-Pierre Gallois. Slicing communicating automata specifications: polynomial algorithms for model reduction. Formal Aspects of Computing, 2008. [LGP07] Sebastien Labbe, Jean-Pierre Gallois, and Marc Pouzet. Slicing communicating automata specifications for efficient model reduction. In Proceedings of ASWEC, pages 191–200, USA, 2007. IEEE Computer Society. [LH07] Sara Van Langenhove and Albert Hoogewijs. SV[t]L: System verification through logic tool support for verifying sliced hierarchical statecharts. In Lecture Notes in Computer Science, Recent Trends in Algebraic Development Techniques, pages 142–155. Springer Berlin / Heidelberg, 2007. [LHHR94] N.G. Leveson, M.P.E. Heimdahl, H. Hildreth, and J.D. Reese. Requirements Specification for Process-Control Systems. IEEE Transactions on Software Engineering, 20(9):684–706, September 1994. [RAB^+05] Venkatesh Prasad Ranganath, Torben Amtoft, Anindya Banerjee, Matthew B. Dwyer, and John Hatcliff. A new foundation for control-dependence and slicing for modern program structures. In ESOP, pages 77–93, 2005. [RAB^+07] Venkatesh Prasad Ranganath, Torben Amtoft, Anindya Banerjee, John Hatcliff, and Matthew B. Dwyer. A new foundation for control dependence and slicing for modern program structures. ACM Trans. Program. Lang. Syst., 29(5):27, 2007. [SW99] Frank Strobl and Alexander Wisspeintner.
Specification of an elevator control system – an AutoFocus case study. Technical Report TUM-I9906, Technische Universität München, 1999. [Tip95] Frank Tip. A survey of program slicing techniques. Journal of Programming Languages, 3(3):121–189, September 1995. [WDQ02] Ji Wang, Wei Dong, and Zhi-Chang Qi. Slicing hierarchical automata for model checking UML statecharts. In Proceedings of the 4th ICFEM, pages 435–446, UK, 2002. Springer-Verlag. [Wei81] Mark Weiser. Program slicing. In 5th International Conference on Software Engineering, pages 439–449, San Diego, CA, March 1981.
Rabi oscillation experiment | Cirq | Google Quantum AI

try:
    import cirq
    import recirq
except ImportError:
    !pip install -U pip
    !pip install --quiet cirq
    !pip install --quiet git+https://github.com/quantumlib/ReCirq
    import cirq
    import recirq

import numpy as np
import cirq_google

In this experiment, you are going to use Cirq to check that rotating a qubit by an increasing angle, and then measuring the qubit, produces Rabi oscillations. This requires you to do the following:

1. Prepare the \(|0\rangle\) state.
2. Rotate by an angle \(\theta\) around the \(X\) axis.
3. Measure to see if the result is a 1 or a 0.
4. Repeat steps 1-3 \(k\) times.
5. Report the fraction \(\frac{\text{Number of 1's}}{k}\) found in step 3.

1. Getting to know Cirq

Cirq emphasizes the details of implementing quantum algorithms on near-term devices. For example, when you work on a qubit in Cirq you don't operate on an unspecified qubit that will later be mapped onto a device by a hidden step. Instead, you are always operating on specific qubits at specific locations that you specify. Suppose you are working with a 54-qubit Sycamore chip. This device is included in Cirq by default. It is called cirq_google.Sycamore, and you can see its layout by printing it.
working_device = cirq_google.Sycamore
print(working_device)

(The printed layout is a diamond-shaped grid of coupled qubits: rows run from (0, 5)───(0, 6) at the top, widen to the nine-qubit row (4, 1) through (4, 9), and narrow back down to the single qubit (9, 4) at the bottom.)

For this experiment you only need one qubit and you can just pick whichever one you like.

my_qubit = cirq.GridQubit(5, 6)

Once you've chosen your qubit you can build circuits that use it.

from cirq.contrib.svg import SVGCircuit

# Create a circuit with an X rotation and a measurement.
my_circuit = cirq.Circuit(
    # Rotate the qubit pi/2 radians around the X axis.
    cirq.rx(np.pi / 2).on(my_qubit),
    # Measure the qubit.
    cirq.measure(my_qubit, key="out"),
)

Now you can simulate sampling from your circuit using cirq.Simulator.

sim = cirq.Simulator()
samples = sim.sample(my_circuit, repetitions=10)

You can also get properties of the circuit, such as the density matrix of the circuit's output or the state vector just before the terminal measurement.

state_vector_before_measurement = sim.simulate(my_circuit[:-1])
sampled_state_vector_after_measurement = sim.simulate(my_circuit)
print("State before measurement:")
print(state_vector_before_measurement)
print("State after measurement:")
print(sampled_state_vector_after_measurement)

State before measurement:
measurements: (no measurements)
qubits: (cirq.GridQubit(5, 6),)
output vector: 0.707|0⟩ - 0.707j|1⟩

State after measurement:
measurements: out=0
qubits: (cirq.GridQubit(5, 6),)
output vector: |0⟩

You can also examine the outputs from a noisy environment.
For example, an environment where 10% depolarization is applied to each qubit after each operation in the circuit:

noisy_sim = cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.1))
noisy_post_measurement_state = noisy_sim.simulate(my_circuit)
noisy_pre_measurement_state = noisy_sim.simulate(my_circuit[:-1])
print("Noisy state after measurement:" + str(noisy_post_measurement_state))
print("Noisy state before measurement:" + str(noisy_pre_measurement_state))

Noisy state after measurement: measurements: out=0
qubits: (cirq.GridQubit(5, 6),)
final density matrix:
[[0.9333333 +0.j 0.        +0.j]
 [0.        +0.j 0.06666666+0.j]]

Noisy state before measurement: measurements: (no measurements)
qubits: (cirq.GridQubit(5, 6),)
final density matrix:
[[0.49999994+0.j         0.        +0.43333334j]
 [0.        -0.43333334j 0.49999994+0.j        ]]

2. Parameterized Circuits and Sweeps

Now that you have some of the basics end to end, you can create a parameterized circuit that rotates by an angle \(\theta\):

import sympy

theta = sympy.Symbol("theta")
parameterized_circuit = cirq.Circuit(
    cirq.rx(theta).on(my_qubit),
    cirq.measure(my_qubit, key="out"),
)

In the above block you saw that there is a sympy.Symbol that you placed in the circuit. Cirq supports symbolic computation involving circuits. What this means is that when you construct cirq.Circuit objects you can put placeholders in many of the classical control parameters of the circuit which you can fill with values later on. Now if you wanted to use cirq.simulate or cirq.sample with the parameterized circuit you would also need to specify a value for theta.

sim.sample(parameterized_circuit, params={theta: 2}, repetitions=10)

You can also specify multiple values of theta, and get samples back for each value.
sim.sample(parameterized_circuit, params=[{theta: 0.5}, {theta: np.pi}], repetitions=10)

Cirq has shorthand notation you can use to sweep theta over a range of values:

sim.sample(
    parameterized_circuit,
    params=cirq.Linspace(theta, start=0, stop=np.pi, length=5),
    repetitions=10,
)

The result value being returned by sim.sample is a pandas.DataFrame object. Pandas is a common library for working with table data in Python. You can use standard pandas methods to analyze and summarize your results.

import pandas

big_results = sim.sample(
    parameterized_circuit,
    params=cirq.Linspace(theta, start=0, stop=np.pi, length=20),
    repetitions=10,
)

# big_results is too big to look at. Plot cross tabulated data instead.
pandas.crosstab(big_results.theta, big_results.out).plot()

<Axes: xlabel='theta'>

3. The ReCirq experiment

ReCirq comes with a pre-written Rabi oscillation experiment, recirq.benchmarks.rabi_oscillations, which performs the steps outlined at the start of this tutorial to create a circuit that exhibits Rabi oscillations, or Rabi cycles. This method takes a cirq.Sampler, which could be a simulator or a network connection to real hardware, as well as a qubit to test and two iteration parameters, num_points and repetitions. It then runs repetitions many experiments on the provided sampler, where each experiment is a circuit that rotates the chosen qubit by some \(\theta\) Rabi angle around the \(X\) axis (by applying an exponentiated \(X\) gate). The result is a sequence of the expected probabilities of the chosen qubit at each of the Rabi angles.

import datetime
from recirq.benchmarks import rabi_oscillations

result = rabi_oscillations(
    sampler=noisy_sim, qubit=my_qubit, num_points=50, repetitions=10000
)
result.plot()

<Axes: xlabel='Rabi Angle (Radian)', ylabel='Excited State Probability'>

Notice that you can tell from the plot that you used the noisy simulator you defined earlier. You can also tell that the amount of depolarization is roughly 10%.

4. Exercise: Find the best qubit

As you have seen, you can use Cirq to perform a Rabi oscillation experiment.
You can either make the experiment yourself out of the basic pieces made available by Cirq, or use the prebuilt experiment method. Now you're going to put this knowledge to the test. There is some amount of depolarizing noise on each qubit. Your goal is to characterize every qubit from the Sycamore chip using a Rabi oscillation experiment, and find the qubit with the lowest noise according to the secret noise model.

import hashlib

class SecretNoiseModel(cirq.NoiseModel):
    def noisy_operation(self, op):
        # Hey! No peeking!
        q = op.qubits[0]
        v = hashlib.sha256(str(q).encode()).digest()[0] / 256
        yield cirq.depolarize(v).on(q)
        yield op

secret_noise_sampler = cirq.DensityMatrixSimulator(noise=SecretNoiseModel())

q = list(cirq_google.Sycamore.metadata.qubit_set)[3]
print("qubit", repr(q))
rabi_oscillations(sampler=secret_noise_sampler, qubit=q).plot()

qubit cirq.GridQubit(6, 1)
<Axes: xlabel='Rabi Angle (Radian)', ylabel='Excited State Probability'>
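Two of the numbers quoted in this tutorial can be sanity-checked with plain NumPy, independently of Cirq. The 0.9333/0.0667 entries of the noisy post-measurement density matrix follow from one application of the depolarizing channel with p = 0.1 to |0⟩⟨0|, and the noiseless Rabi curve that the noisy plot damps toward 0.5 is P(1) = sin²(θ/2):

```python
import numpy as np

# Depolarizing channel: rho -> (1 - p) rho + (p/3)(X rho X + Y rho Y + Z rho Z)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, p):
    return (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|, the post-measurement state
print(np.diag(depolarize(rho0, 0.1)).real)  # diagonal ~ [0.9333, 0.0667]

# Ideal (noiseless) excited-state probability after rx(theta) applied to |0>
def rabi_probability(theta):
    return np.sin(theta / 2) ** 2

print(rabi_probability(np.array([0.0, np.pi / 2, np.pi])))  # values 0, 0.5, 1
```

This is a hand-rolled check, not part of the tutorial's code; it only reproduces the analytics behind the simulator output.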
Growing a tree from its branches

Given a set L of n disjoint line segments in the plane, we show that it is always possible to form a spanning tree of the endpoints of the segments such that each line segment is an edge of the tree and the tree has no crossing edges. Such a tree is known as an encompassing tree and can be constructed in O(n log n) time when no three endpoints in L are collinear. In the presence of collinear endpoints, we show first that an encompassing tree with no crossing edges still exists and can be computed in O(n^2) time, and second that the maximum degree of a node in the minimum weight spanning tree formed by these line segments is seven, and that there exists a set of line segments achieving this bound. Finally, we show that the complexity of finding the minimum weight spanning tree is Θ(n log n), which is optimal, when we assume that the endpoints of the line segments are in general position.

ASJC Scopus subject areas
• Control and Optimization
• Computational Mathematics
• Computational Theory and Mathematics
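As a rough illustration of the minimum-weight variant (not the paper's O(n log n) algorithm), one can run Kruskal's algorithm on the segment endpoints with the segment edges merged in first, so that every segment is forced into the tree. This sketch deliberately ignores the non-crossing requirement, which needs a more careful edge choice:

```python
import math
from itertools import combinations

def constrained_mst(segments):
    """Spanning tree of segment endpoints that must contain every segment.
    Plain Kruskal with the segment edges union-ed first; the non-crossing
    condition from the paper is NOT enforced here."""
    points = [p for s in segments for p in s]
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri == rj:
            return False
        parent[ri] = rj
        return True

    tree = []
    for k in range(len(segments)):          # force every segment into the tree
        union(2 * k, 2 * k + 1)
        tree.append((2 * k, 2 * k + 1))
    edges = sorted(combinations(range(len(points)), 2),
                   key=lambda e: math.dist(points[e[0]], points[e[1]]))
    for i, j in edges:                      # then greedily add shortest edges
        if union(i, j):
            tree.append((i, j))
    return tree

segs = [((0, 0), (1, 0)), ((0, 2), (1, 2))]
tree = constrained_mst(segs)
print(len(tree))  # 3: a spanning tree on the 2n = 4 endpoints has 2n - 1 edges
```

The function and example inputs are hypothetical, chosen only to show the "segments forced as edges" constraint in code.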
Patterns of inflection

Find the relationship between the locations of points of inflection, maxima and minima of functions. This problem is a good place to put into action the analogies for calculus explored in Calculus Analogies, so you might wish to consider that problem first.

Draw sets of coordinate axes $x$-$y$ and sketch a few simple smooth curves with varying numbers of turning points. Mark on each curve the approximate locations of the maxima $M$, the minima $m$ and the points of inflection $I$. In each case list the order in which the maxima, minima and points of inflection occur along the curve. What patterns do you notice in these orders? Make a conjecture. Test out your conjecture on the cubic equations $$3x^3-6x^2+9x+11\quad\mbox{and}\quad 2x^3-5x^2-4x$$ Prove your conjecture for any cubic equation.

Extension: Consider the same problem for polynomials of order 4 or greater.

Getting Started

A point of inflection of a curve $y=f(x)$ is a point at which the second derivative $\frac{d^2y}{dx^2}$ changes sign. Geometrically, you can think of a point of inflection as a point where the tangent to the curve crosses the curve. Points of inflection need not also be stationary points (first derivative also zero), although they might be.

Student Solutions

A First Look

In the first graph below, we have a cubic with two turning points and one point of inflection. The blue dot indicates a point of inflection and the red dots indicate maximum/minimum points. A point of inflection occurs when the second derivative of the function changes sign. To show there is a blue point in between the two red points, we consider a gradient table of the above curve (try this with any similar looking cubic if you are not sure).

We then consider a quintic, as shown below. This has four turning points and three points of inflection, marked with red and blue dots respectively. Call the blue points $I_1, I_2, I_3$ going from right to left.
We note that $I_1, I_3$ mark a change from concave to convex, while the other point of inflection, $I_2$, marks a change from convex to concave.

A General Idea

We postulate that if there is a maximum followed by a minimum, or a minimum followed by a maximum, then there must be a point of inflection in between. We prove this by looking at a general cubic equation f(x), as in the first graph, and treating its derivative as a new function. This new function is zero at points a and c. Thus the derivative function must have a turning point, marked b, between points a and c, and we call this the point of inflection. The existence of b is a consequence of Rolle's theorem.

By looking at the second graph also, we conjecture that if there are n turning points, then there will be n-1 points of inflection. For example, in the first graph we had 2 turning points and 1 point of inflection. Can you find a way to prove this?

A Note on Concavity and Convexity

We use the term concavity to describe the second derivative of the curve. If $\frac{\mathrm{d}^2y}{\mathrm{d}x^2}\geq 0$ then we call that part of the curve convex, and if $\frac{\mathrm{d}^2y}{\mathrm{d}x^2}\leq 0$ then we call that part of the curve concave. Visually, we can see these definitions by drawing a straight line between any two points on the curve. The function is convex if this line is above the curve, and concave if below. A point of inflection occurs when this line crosses to the other side of the curve. That is, the curve will alternate between convex and concave, with the points of change-over being points of inflection.

Sometimes stationary points and points of inflection coincide, as in the function $y=x^3$. However this is not always the case, as in the first example.

Two Cubics

We now consider our conjecture in terms of the two cubic equations.

1) $y=3x^3-6x^2+9x+11$

This has derivative $\frac{\mathrm{d}y}{\mathrm{d}x}=9x^2-12x+9$, and second derivative $\frac{\mathrm{d}^2y}{\mathrm{d}x^2}=18x-12$.
To find the stationary points, we set $\frac{\mathrm{d}y}{\mathrm{d}x}=0$ $\Rightarrow x=\frac{2\pm i\sqrt{5}}{3}$, so our turning points are complex, not real. To find the points of inflection, we set $\frac{\mathrm{d}^2y}{\mathrm{d}x^2}=0$ $\Rightarrow x={2\over 3}$, so we have one real inflection point. As expected, we have one more stationary point than point of inflection. Plot the graph yourself to see what a cubic looks like when the stationary points are complex.

2) $y=2x^3-5x^2-4x$

To find the stationary points, we set $\frac{\mathrm{d}y}{\mathrm{d}x}=0$ $\Rightarrow \frac{\mathrm{d}y}{\mathrm{d}x}=6x^2-10x-4=0 \Rightarrow x={-1\over 3}, 2$

And to find the points of inflection, we set $\frac{\mathrm{d}^2y}{\mathrm{d}x^2}=0$ $\Rightarrow \frac{\mathrm{d}^2y}{\mathrm{d}x^2}=12x-10=0 \Rightarrow x={5\over 6}$

As expected, we have one more stationary point than point of inflection, and this time all our points are real. To determine the order of our stationary points, we calculate the second derivative at $x={-1\over 3}, 2$. $\frac{\mathrm{d}^2y}{\mathrm{d}x^2}=14$ at $x=2$ and $\frac{\mathrm{d}^2y}{\mathrm{d}x^2}=-14$ at $x=\frac{-1}{3}$. So we have a maximum (at $x=-\frac{1}{3}$) followed by a minimum (at $x=2$).

General Cubics

Consider the function $f(x)=ax^3+bx^2+cx+d$. This has derivatives $\frac{\mathrm{d}y}{\mathrm{d}x}=3ax^2+2bx+c$ and $\frac{\mathrm{d}^2y}{\mathrm{d}x^2}=6ax+2b$. If we set $\frac{\mathrm{d}y}{\mathrm{d}x}=0$, we have at most two distinct stationary points which can be found using the quadratic formula. And we have one point of inflection, which can be found by setting $\frac{\mathrm{d}^2y}{\mathrm{d}x^2}=0 \Rightarrow x= \frac{-b}{3a}$.

Now if $a> 0$ then for large negative values of x the function will have negative values. This means the first turning point will have to be a maximum (draw a few curves if you are not convinced). Thus the curve has one point of inflection which is in between maximum and minimum points (not necessarily real), the order of which is determined by the value of a.
Similarly if $a< 0$ then for large negative values of x the function will have positive values, and so the first turning point will be a minimum. We leave you to look at further patterns for polynomials of order 4 and greater!

Teachers' Resources

Why do this problem?

This problem allows for straightforward algebraic proof, a simpler, but more sophisticated, proof using geometrical reasoning, or a combination of the two. The problem involves basic ideas in calculus and will naturally fit into a scheme of work for calculus.

Possible approach

The problem can be used as students are beginning to learn the terminology of turning points and differentiation of polynomials. If they have not yet encountered points of inflection you can simply define these as places where the tangent crosses the curve, which corresponds algebraically to the second derivative changing sign. Ask that students neatly draw axes with rulers before starting their sketches. After a few minutes' exploration you might suggest that students try to sketch examples of curves of the form M, m, I, MIm, IMI, mIMm, ImIMI, with I, M and m denoting points of inflection, local maxima and local minima respectively. More advanced students will wish to include asymptotes and other interesting curves. This should be encouraged, but restrict the investigation to smooth curves. Conjectures to aim for might be of the form 'There is always a point of inflection between a maximum and a minimum' or 'There is always a turning point between two points of inflection'. Conjectures should be clearly stated. Better students might wish to improve their conjectures to forms such as 'There is always a point of inflection or an asymptote between a maximum and a minimum' or 'For polynomials there is always a point of inflection between a maximum and a minimum'. There are two levels of sophistication in the proofs. Weaker students might simply test out their conjectures on many examples.
Stronger students will go for an algebraic solution whilst the most sophisticated thinkers will use geometrical reasoning to avoid any algebra. To conclude the lesson, students can refine their conjectures if necessary and share them, along with the justification or proof they have constructed.

Key questions

How can you spot a turning point/point of inflection geometrically (i.e. on the graph)? How can you find a turning point/point of inflection algebraically? For a cubic polynomial, what sort of polynomial is the first derivative? Second derivative? Will a cubic polynomial always have: a maximum? a minimum? a point of inflection?

Possible extension

Extension is included in the question. You could also ask that students see for which of the examples M, m, I, MIm, IMI, mIMm and ImIMI they can provide an algebraic example.

Possible support

You could simply suggest that students try to show that between a maximum and a minimum there will always be a point of inflection. They could try this out on several cubic polynomials, giving practice in differentiation and use of the formula for the solution of quadratic equations.
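The worked examples above, and a neat general fact, can be spot-checked numerically: the complex stationary points of $y=3x^3-6x^2+9x+11$ are $\frac{2\pm i\sqrt{5}}{3}$, and for any cubic the inflection point $-\frac{b}{3a}$ is exactly the midpoint of the two stationary points, because the roots of $3ax^2+2bx+c$ sum to $-\frac{2b}{3a}$. A pure-Python sketch (the helper names are our own):

```python
import cmath

def stationary_points(a, b, c):
    """Roots of y' = 3a x^2 + 2b x + c for y = a x^3 + b x^2 + c x + d."""
    disc = cmath.sqrt((2 * b) ** 2 - 12 * a * c)
    return (-2 * b + disc) / (6 * a), (-2 * b - disc) / (6 * a)

def inflection_point(a, b, c):
    """Root of y'' = 6a x + 2b."""
    return -b / (3 * a)

# Example 1: y = 3x^3 - 6x^2 + 9x + 11 -> complex stationary points (2 ± i*sqrt(5))/3
print(stationary_points(3, -6, 9))

# Example 2: y = 2x^3 - 5x^2 - 4x -> stationary points 2 and -1/3, inflection 5/6
print(stationary_points(2, -5, -4), inflection_point(2, -5, -4))

# Midpoint property: the inflection point is the average of the stationary points
for a, b, c in [(3, -6, 9), (2, -5, -4), (1, 0, -1)]:
    r1, r2 = stationary_points(a, b, c)
    assert abs((r1 + r2) / 2 - inflection_point(a, b, c)) < 1e-12
```

So even when the stationary points are complex, they sit symmetrically about the (always real) inflection point.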
Registered Data

[00196] Recent development of mathematical geophysics
• Session Time & Room : 3D (Aug.23, 15:30-17:10) @G710
• Type : Proposal of Minisymposium
• Abstract : The purpose of this minisymposium is to interact with mathematicians working on geophysics with various recent topics: large time behavior of solutions, machine learning approach, flow behavior on manifolds and meteorological analysis. Each of these topics has a long research history. However, the tendency of recent studies seems to be a broader point of view, not only from each own research field but also from an interdisciplinary perspective.
• Organizer(s) : Tsuyoshi Yoneda
• Classification : 35Q86, 76U05, 35Q30, 37N10, 76D03
• Minisymposium Program :
□ 00196 (1/1) : 3D @G710 [Chair: Tsuyoshi Yoneda]
☆ [01262] Global solutions for rotating MHD equations in the critical space
○ Format : Talk at Waseda University
○ Author(s) :
■ Ryo Takada (The University of Tokyo)
■ Keiji Yoneda (Kyushu University)
○ Abstract : We consider the initial value problem for the incompressible rotating magnetohydrodynamics equations in $\mathbb{R}^3$. We prove the unique existence of global solutions for large initial data in the scaling critical space $\dot{H}^{\frac{1}{2}}(\mathbb{R}^3)$ when the rotation speed is sufficiently high. In order to control large magnetic fields, we introduce a modified linear solution for the velocity, and show its smallness in a suitable space-time norm by means of the dispersive effect of the Coriolis force.
☆ [02684] Multi-scale interaction of tropical weather in a simplified three-dimensional model
○ Format : Talk at Waseda University
○ Author(s) :
■ Daisuke Takasuka (University of Tokyo)
○ Abstract : In the tropics, various kinds of weather systems are spontaneously realized, as represented by mesoscale convective systems, equatorial waves, and the Madden–Julian oscillation (MJO). They interact with each other through moist processes, wave–mean-flow interaction, and so on.
As an example of this, we will present a non-linear multi-scale process in the MJO initiation, which involves the mean tropical circulations and equatorial waves, using a simplified three-dimensional fluid dynamical model. ☆ [00606] Eigenvalue Problem for Perturbation Operator of Two-jet Kolmogorov Type Flow ○ Format : Talk at Waseda University ○ Author(s) : ■ Tatsu-Hiko Miura (Hirosaki University) ○ Abstract : We consider the linear stability of the two-jet Kolmogorov type flow which is a stationary solution to the vorticity equation on the unit sphere given by the zonal spherical harmonic function of degree two. Using the mixing structure of the two-jet Kolmogorov type flow, we show that the perturbation operator does not have eigenvalues except for zero. As an application, we also prove the occurrence of the enhanced dissipation in the linearized setting. ☆ [00377] On the physics-informed neural networks approximating the primitive equations ○ Format : Online Talk on Zoom ○ Author(s) : ■ Quyuan Lin (University of California, Santa Barbara) ■ Ruimeng Hu (University of California, Santa Barbara) ■ Alan Raydan (University of California, Santa Barbara) ■ Sui Tang (University of California, Santa Barbara) ○ Abstract : Large scale dynamics of the oceans and the atmosphere are governed by the primitive equations (PEs). Due to the nonlinearity and nonlocality, the numerical study of the PEs is in general a hard task. In this talk, I will introduce physics-informed neural networks (PINNs) to tackle this challenge, and show the theoretical error estimates and the results from numerical experiments that confirm the reliability of PINNs.
Transactions Online

Atsushi OHTA, Kohkichi TSUJI, "Computational Complexity of Liveness Problem of Normal Petri Net" in IEICE TRANSACTIONS on Fundamentals, vol. E92-A, no. 11, pp. 2717-2722, November 2009, doi: 10.1587/transfun.E92.A.2717

Abstract: Petri net is a powerful modeling tool for concurrent systems. Liveness, which is the problem of verifying that there exists no local deadlock, is one of the most important properties of a Petri net to analyze. The computational complexity of liveness of a general Petri net is deterministic exponential space. Liveness is studied for subclasses of Petri nets to obtain necessary and sufficient conditions that need less computational cost. These are mainly obtained using a subset of places called siphons. The cs-property, which denotes that every siphon has token(s) in every reachable marking, is one of the key properties in liveness analysis. On the other hand, a normal Petri net is a subclass of Petri net whose reachability set can be effectively calculated. This paper studies the computational complexity of the liveness problem of normal Petri nets. First, it is shown that liveness of a normal Petri net is equivalent to the cs-property. Then we show this problem is co-NP complete by deriving a nondeterministic algorithm for non-liveness which is similar to the algorithm for liveness suggested by Howell et al. Lastly, we study the structural features of bounded Petri nets where liveness and the cs-property are equivalent. From this consideration, the liveness problem of bounded normal Petri nets is shown to be solvable in deterministic polynomial time.

URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E92.A.2717/_p
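To make the siphon terminology concrete: a siphon is a set S of places whose preset is contained in its postset (•S ⊆ S•), so once S is empty of tokens it stays empty. A toy check in Python, on a hypothetical two-place net (not tied to the paper's algorithm):

```python
def is_siphon(places, pre, post):
    """pre[t] / post[t] give the input / output place sets of transition t.
    `places` is a siphon if every transition that puts a token into it
    also consumes a token from it (preset of S is a subset of postset of S)."""
    s = set(places)
    feeders = {t for t in post if post[t] & s}    # •S: transitions outputting into S
    drainers = {t for t in pre if pre[t] & s}     # S•: transitions inputting from S
    return feeders <= drainers

# Tiny net: t1 moves a token p1 -> p2, t2 moves it back p2 -> p1.
pre = {"t1": {"p1"}, "t2": {"p2"}}
post = {"t1": {"p2"}, "t2": {"p1"}}
print(is_siphon({"p1", "p2"}, pre, post))  # True: once both are empty, they stay empty
print(is_siphon({"p2"}, pre, post))        # False: t1 can refill p2 from p1
```

The cs-property discussed in the abstract then asks that every such siphon keeps at least one token in every reachable marking.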
What type of math is NC Math 2?

NC MATH 2 HONORS

It continues a progression of the standards established in NC Math 1. In addition to these standards, NC Math 2 Honors includes polynomials, congruence and similarity of figures, right triangle trigonometry, transformation geometry, and usage of geometric reasoning to prove theorems.

Is the Math 3 EOC required in North Carolina?

What EOCs are given at the high school level? The EOC assessments are available for Biology, English II, NC Math 1, and NC Math 3. Students enrolled for credit in courses where EOC assessments are required must take the appropriate EOC assessment at the completion of the course.

How long is the NC Math 1 EOC?

150 minutes for most students to complete the EOC NC Math 1 and NC Math 3 tests.

What does NC math mean?

NC Math II is an integrated course that builds on students' Algebra knowledge from Math I and begins to move into topics in Geometry, Trigonometry, and Statistics.

What is NC math?

NC Math 1 Intervention is designed for students who have not experienced success in their past mathematics courses. Struggling students need intensive support. This program provides increased access and instructional support via blended online learning.

What is taught in Math 2?

Students in Mathematics II focus on the structure of expressions, writing equivalent expressions to clarify and reveal aspects of the quantities represented. Students create and solve equations, inequalities, and systems of equations involving exponential and quadratic expressions.

Is the NC Math 3 EOC multiple-choice?

The online NC Math 1 and NC Math 3 assessments contain multiple-choice items, numeric entry items, and technology-enhanced items. The paper/pencil assessment consists of multiple-choice and gridded-response items. The NC Math 3 assessment contains only calculator-active items.

What happens if you fail the EOC but pass the class in North Carolina?

What if you pass the course but fail the test?
If a student passes the course, but does not earn the required minimum score on the EOC assessment, the student will retake the test. The student is not required to retake a course as a condition of retaking the test. What does NC Math 1 mean? Math 1 Course Description Math 1 is the first math course in the North Carolina High School Math Graduation Requirement Sequence. Math 1 students study linear, exponential, and quadratic functions. What is Foundations of NC math? Foundations of Math 1 offers a structured remediation solution based on the NCTM Curricular Focal Points and is designed to expedite student progress in acquiring 3rd- to 5th-grade skills. The course is appropriate for use as remediation for students in grades 6 to 12.
{"url":"https://www.farinelliandthekingbroadway.com/2022/07/19/what-type-of-math-is-nc-math-2/","timestamp":"2024-11-06T01:39:41Z","content_type":"text/html","content_length":"42712","record_id":"<urn:uuid:8330d62d-e7d2-4ef5-91b7-aa6cf9fd03d5>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00637.warc.gz"}
Quantum Mechanics 2
The course is not on the list. Without time-table.
Code: 02KVA2B | Completion: Z,ZK | Credits: 6 | Range: 4+2 | Language: Czech
Course guarantor:
Symmetry in quantum mechanics, invariance and conservation laws, approximate methods, scattering theory, systems of identical particles.
Knowledge of the basic course of physics and the subject 02KVAN - Quantum mechanics.
Syllabus of lectures:
1. Symmetry: general formalism, continuous and discrete transformations, generators. Translation, rotation.
2. Parity, time inversion. Gauge transformation, particle in an electromagnetic field.
3. Addition of angular momenta: Clebsch-Gordan coefficients, 6j-symbols, irreducible tensor operators, Wigner-Eckart theorem.
4. Elementary theory of representations: energy, coordinate and momentum representations, general properties of solutions of the Schroedinger equation, free particle solution, decomposition of the plane wave into partial waves.
5. Time evolution and propagators: Schroedinger, Heisenberg and Dirac pictures, resolvent, stationary Green function, propagator, retarded and advanced Green operators, Lippmann-Schwinger equation and perturbative solution for the evolution operator.
6. Approximate methods: variational method, helium atom. WKB method, connection formulas, tunneling.
7. Time-dependent perturbation theory, various perturbations, Fermi golden rule. Transitions between discrete levels and into continuum, particle scattered by an external field.
8. Particle in an e.m. field: Pauli equation, photoeffect.
9. Introduction to scattering theory: from time-dependent to time-independent description, wave operators, S-matrix and T-matrix, stationary scattering states, Lippmann-Schwinger equation, scattering amplitude and cross section.
10. Born series, partial waves, phase shifts. Solutions in coordinate and momentum representations.
11. Systems of identical particles: Pauli principle, (anti)symmetrization of wave functions. One-particle basis, Slater determinants.
12.
Fock space, creation and annihilation operators, one- and two-particle operators, Hartree-Fock method.
Syllabus of tutorials:
1. Symmetry: general formalism, continuous and discrete transformations, generators. Translation, rotation.
2. Parity, time inversion. Gauge transformation, particle in an electromagnetic field.
3. Addition of angular momenta: Clebsch-Gordan coefficients, 6j-symbols, irreducible tensor operators, Wigner-Eckart theorem.
4. Elementary theory of representations: energy, coordinate and momentum representations, general properties of solutions of the Schroedinger equation, free particle solution, decomposition of the plane wave into partial waves.
5. Time evolution and propagators: Schroedinger, Heisenberg and Dirac pictures, resolvent, stationary Green function, propagator, retarded and advanced Green operators, Lippmann-Schwinger equation and perturbative solution for the evolution operator.
6. Approximate methods: variational method, helium atom. WKB method, connection formulas, tunneling.
7. Time-dependent perturbation theory, various perturbations, Fermi golden rule. Transitions between discrete levels and into continuum, particle scattered by an external field.
8. Particle in an e.m. field: Pauli equation, photoeffect.
9. Introduction to scattering theory: from time-dependent to time-independent description, wave operators, S-matrix and T-matrix, stationary scattering states, Lippmann-Schwinger equation, scattering amplitude and cross section.
10. Born series, partial waves, phase shifts. Solutions in coordinate and momentum representations.
11. Systems of identical particles: Pauli principle, (anti)symmetrization of wave functions. One-particle basis, Slater determinants.
12. Fock space, creation and annihilation operators, one- and two-particle operators, Hartree-Fock method.
Study Objective: Advanced quantum-mechanical methods, perturbative formulation and second quantization. Application of quantum description and various (in particular perturbative) methods of solution to realistic microscopic systems.
Study materials:
Key references:
[1] D.J. Griffiths: Introduction to Quantum Mechanics, Prentice Hall, 2nd edition, 2004
[2] J. Formánek: Introduction to Quantum Mechanics I, II, Academia, 2004 (in Czech)
Recommended references:
[3] J.R. Taylor: Scattering Theory, J. Wiley and Sons, 1972
[4] E. Merzbacher: Quantum Mechanics, 3rd edition, John Wiley, 1998
Further information: No time-table has been prepared for this course. The course is a part of the following study plans:
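Syllabus items 5, 9 and 10 all revolve around the Lippmann-Schwinger equation, so it may help to state it explicitly. This is general textbook background (in the notation of, e.g., Taylor's Scattering Theory), not text taken from the course page:

```latex
% |\phi\rangle is the free solution of H_0, and V the interaction:
|\psi^{\pm}\rangle \;=\; |\phi\rangle \;+\; G_0(E \pm i\varepsilon)\,V\,|\psi^{\pm}\rangle,
\qquad G_0(z) = (z - H_0)^{-1}.
```

Iterating the right-hand side in powers of V generates the Born series mentioned in item 10.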
{"url":"https://bilakniha.cvut.cz/en/predmet11337005.html","timestamp":"2024-11-02T02:28:07Z","content_type":"text/html","content_length":"12136","record_id":"<urn:uuid:800d5084-efa6-481b-bbd7-82b94fa64858>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00687.warc.gz"}
Dupont, Berridge, Goldbeter, 1991
Model Status
This model has been built with the differential expressions in Dupont's 1991 paper. This file is known to run in PCEnv and COR, and the parameter for the constant (beta) can be altered to produce all diagrams of figure 2 in the paper. The current parameterization is set to reproduce beta = 60% (0.6). The initial conditions Z = 0.52 and Y = 0.93 were found by allowing the model to run to steady state at 100 seconds.
Model Structure
We consider a simple, minimal model for signal-induced Ca2+ oscillations based on Ca(2+)-induced Ca2+ release. The model takes into account the existence of two pools of intracellular Ca2+, namely, one sensitive to inositol 1,4,5 trisphosphate (InsP3) whose synthesis is elicited by the stimulus, and one insensitive to InsP3. The discharge of the latter pool into the cytosol is activated by cytosolic Ca2+. Oscillations in cytosolic Ca2+ arise in this model either spontaneously or in an appropriate range of external stimulation; these oscillations do not require the concomitant, periodic variation of InsP3. The following properties of the model are reviewed and compared with experimental observations: (a) Control of the frequency of Ca2+ oscillations by the external stimulus or extracellular Ca2+; (b) correlation of latency with period of Ca2+ oscillations obtained at different levels of stimulation; (c) effect of a transient increase in InsP3; (d) phase shift and transient suppression of Ca2+ oscillations by Ca2+ pulses, and (e) propagation of Ca2+ waves. It is shown that on all these counts the model provides a simple, unified explanation for a number of experimental observations in a variety of cell types.
The model based on Ca(2+)-induced Ca2+ release can be extended to incorporate variations in the level of InsP3 as well as desensitization of the InsP3 receptor; besides accounting for the phenomena described by the minimal model, the extended model might also account for the occurrence of complex Ca2+ oscillations. Schematic diagram of the cell model for signal-induced, intracellular calcium oscillations. The complete original paper reference is cited below: Signal-induced Ca2+ oscillations: properties of a model based on Ca(2+)-induced Ca2+ release, Dupont G, Berridge M.J, Goldbeter A, 1991, Cell Calcium 12, 73-85. PubMedID: 1647878
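The two-pool structure described above can be sketched with a plain forward-Euler loop. The rate-law structure (Hill-type pumping and Ca2+-activated release) follows the minimal CICR model, but every numerical constant below is an illustrative placeholder of my own choosing, not the paper's calibrated parameter set:

```python
# Minimal Euler-integration sketch of a two-pool CICR model in the spirit of
# Dupont-Berridge-Goldbeter (1991). Z = cytosolic Ca2+, Y = Ca2+ in the
# InsP3-insensitive pool, beta = stimulation level. All constants below are
# illustrative placeholders, NOT the paper's calibration.

def cicr_step(Z, Y, beta, dt):
    v0, v1 = 1.0, 7.3            # constant and stimulus-driven Ca2+ influx
    VM2, K2, n = 65.0, 1.0, 2    # pumping into the InsP3-insensitive pool
    VM3, KR, m = 500.0, 2.0, 2   # Ca2+-induced release from that pool
    KA, p = 0.9, 4               # activation by cytosolic Ca2+
    kf, k = 1.0, 10.0            # passive leak and cytosolic efflux
    v2 = VM2 * Z**n / (K2**n + Z**n)
    v3 = VM3 * (Y**m / (KR**m + Y**m)) * (Z**p / (KA**p + Z**p))
    dZ = v0 + v1 * beta - v2 + v3 + kf * Y - k * Z
    dY = v2 - v3 - kf * Y
    return Z + dt * dZ, Y + dt * dY

def simulate(beta=0.6, Z0=0.52, Y0=0.93, dt=1e-4, steps=20000):
    Z, Y, traj = Z0, Y0, []
    for _ in range(steps):
        Z, Y = cicr_step(Z, Y, beta, dt)
        traj.append(Z)
    return traj
```

Depending on beta, the trajectory may or may not oscillate; for real work one would load the CellML file into PCEnv or COR, or hand the right-hand side to a stiff ODE solver rather than Euler.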
{"url":"https://models.cellml.org/exposure/060c119cc5365e3d9cd0203c82fe0121/view","timestamp":"2024-11-03T01:21:29Z","content_type":"application/xhtml+xml","content_length":"18292","record_id":"<urn:uuid:9e44214c-e4b6-422e-b358-a7a74a3d4149>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00782.warc.gz"}
MATHEMATICS > Question 6 (Domanda 6) | Filo
Question asked by a Filo student: Consider two points A(0,0) and B(1,1) on a Cartesian plane. The equation of the axis (perpendicular bisector) of the segment AB is:
Answered with 2 video solutions (avg. duration 2 min, 122 upvotes). Updated on: Aug 18, 2023. Topic: Coordinate geometry. Subject: Mathematics. Class: Grade 12.
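For reference, a short worked solution (mine, not transcribed from the Filo video answers); the "axis" of a segment is its perpendicular bisector:

```latex
% Midpoint of A(0,0) and B(1,1): M = (1/2, 1/2); slope of AB: m_{AB} = 1.
% The axis passes through M perpendicular to AB, so its slope is -1:
y - \tfrac{1}{2} = -\left(x - \tfrac{1}{2}\right)
\quad\Longrightarrow\quad x + y = 1.
```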
{"url":"https://askfilo.com/user-question-answers-mathematics/mathematics-domanda-6-consider-two-points-and-on-a-cartesian-35343535373539","timestamp":"2024-11-05T01:12:01Z","content_type":"text/html","content_length":"235986","record_id":"<urn:uuid:b24a118c-1842-4c6b-b9ce-d94a7c28c6e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00269.warc.gz"}
perplexus.info :: Just Math : Three circles on sphere The smaller the three circles are (a) relative to the sphere's radius, r, the closer it is to Euclidean space. Also, the radii involved can be expressed in various systems, so one must be decided upon. Is the radius of any circle (a or the solution circle) the arc length from its epicenter (on the surface of the sphere) to its circumference? ... or the Euclidean distance from its center on its secant plane? ... or the Euclidean distance from the epicenter to the circumference along the secant line? Posted by Charlie on 2021-01-28 06:55:14
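To make the three candidate "radii" in the comment concrete, here is the standard spherical-geometry relation (general background, not taken from the original thread). Writing θ for the angular radius subtended at the sphere's center:

```latex
% r = sphere radius, \rho = arc length from epicenter to circumference, \theta = \rho / r
\text{arc radius: } \rho = r\,\theta, \qquad
\text{secant-plane radius: } a_{\text{plane}} = r\sin\theta, \qquad
\text{chord (secant-line) radius: } a_{\text{chord}} = 2r\sin\!\left(\tfrac{\theta}{2}\right).
% All three coincide to first order as \theta \to 0, recovering the Euclidean limit.
```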
{"url":"http://perplexus.info/show.php?pid=12220&cid=63027","timestamp":"2024-11-07T06:14:58Z","content_type":"text/html","content_length":"12483","record_id":"<urn:uuid:a1e41784-477c-411d-89de-266794cad3e3>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00232.warc.gz"}
s o Our users: What a great tool! I would recommend this software for anyone that needs help with algebra. All the procedures were so simple and easy to follow. T.P., New York It appears you have improved upon an already good program. Again my thanks, and congrats. Jacob Matheson, FL I want to thank the support staff for the invaluable help you have provided me so far. I would otherwise fail the math course I am taking without a doubt. Margaret Thomas, NY Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among Search phrases used on 2013-12-31: • evaluating exponential expression • Exponential Equation Algebra 2/Trig. Word Problems • program in c++ to find hcf of polynomial • solve radical formulas calculator • algebra scaling worksheets • statistics practice sheets with answers • linear equation with fractions calculator • how to solve a 2nd order Differential • "rationalizing the denominator" +worksheet • square root simplify calculator • 8th grade math worksheets-conversion factors • how to solve algebra 1 application of systems • powerpoint presentations on graphing linear equations • graphing calculators with absolute value online • free tool for algebra to sove GMAT papers • math trivia for college • differential equations calculators • number theory investigatory project • elementary algebra for college students 7th teacher edition • help with solving a variable by addition or subtraction • skills needed to solve 2 step equation • trivia about polynomial function • rewriting a quadratic equation in terms of y • sample problems for dividing integers • online complete square solver • iowa test pre algebra • turn decimals into fractions calculator • algebra 2 online graphing calculator • online integration solver • fractions with fractional exponents • fraction to decimal difficult problems • worksheets solving equations 
using two operations • free ppt on surds • make a equation into perfect square • how to rewrite as ,mixed decimals • how to calculate linear feet for a perimeter • i need help in grade 8 pre algebra • 3rd grade math work problems • algebrater • hard algebra quiz • lesson plan on introducing algebra ks3 • free download Chemistry KS3 • simplify by factoring • add and subtract equations with decimals • Solving equation by elimination calculator • where can literal equations be seen or used in everyday life? • prentice hall chemistry connection to our changing world section review answers • yr 9 maths papers • math problem derivative can only be solved with calculator • free online square root calculator • find homework solution for "mathematical statistics with application" + • online equation calculator • Sketch Graphs of Linear Equations solver • square root of x divided by 3 times square root of x minus squar root of y • interval notation for non function parabolas domain and range • Math Saxon Algebra 1 Answer key help • calculating log base • free math worksheets algorithm • free add math tutorial multiple equations • linear equation verbal problem • orleans hanna algebra • first order linear differential equation solver • 7th grade reading free work sheets • sample trivias • mcdougal littell english answers • how to convert decimals to degrees on a calculator • algebra help • algebra master software • USES of algebric expression • free worksheet 8 point compass • division of integers worksheets • read pds in ti 89 • aptitude question and answers • solved sample papers • how to program cubic root on TI-83 • free online level six sats practise papers • algebra trivias • algebra online calculator • online calculator site to write algebraic expressions • ti-89 convert polar to rectangular • free 4th grade division • fraction variable caculator • inventor of the coordinate plane • ti-84 plus programming expr( • graph of fractional exponents
{"url":"https://mathworkorange.com/math-help-calculator/trigonometry/operations-on-delta-function.html","timestamp":"2024-11-03T03:51:26Z","content_type":"text/html","content_length":"87158","record_id":"<urn:uuid:3a4e1783-4440-45d0-976e-a40147461a75>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00840.warc.gz"}
GEOGRAPHICAL TECHNIQUES 140 MCQS MOCK TEST 4 FOR NTA UGC NET - Geographer Corner
GEOGRAPHICAL TECHNIQUES: a 140-question mock test series for the NTA UGC NET and state SET/SLET exams in Geography. This is Part 4 of the series; to attempt the mock test, click on the link below.
GEOGRAPHICAL TECHNIQUE 140 MCQS PART 4
• This test contains 20 questions (of 140 total)
• All questions are compulsory
• No negative marking
• A specific time is given for this quiz
• The quiz starts after clicking the Start button
• After completing the quiz, the result is shown in percentages (%)
1 / 20 Q.61. The headquarters of the Survey of India is located at (A) New Delhi (B) Dehradun (C) Kolkata (D) Bengaluru
2 / 20 Q.62. The headquarters of the Geological Survey of India is (A) Bengaluru (B) Dehradun (C) New Delhi (D) Kolkata
3 / 20 Q.63. The full form of SPSS is (A) Semi Package for the Social Sciences (B) Statistical Package for the Social Sciences (C) Standard Package for the Social Science (D) Special Package for the Social Sciences
4 / 20 Q.64. Who was the originator of Regression Analysis? (A) Clark and Evans (B) Karl Pearson (C) Spearman (D) Francis Galton
5 / 20 Q.65. The Climograph was introduced in Cartography by (A) A.H. Robinson (B) G. Taylor (C) F. Galton (D) G.H. Smith
6 / 20 Q.66. The Nearest Neighbour Analysis (NNA) method was developed by (A) G.H. Smith (B) A.H.
Robinson (C) Clark and Evans (D) Karl Pearson
7 / 20 Q.67. Contours of equal spacing represent (A) Undulating Slope (B) Convex Slope (C) Concave Slope (D) Uniform Slope
8 / 20 Q.68. Match List-I with List-II and select the correct answer using the codes given below:
List-I: A. One Dimensional Diagram; B. Two Dimensional Diagram; C. Three Dimensional Diagram; D. Thematic Map
List-II: 1. Shows Length and Width; 2. Shows Length; 3. Shows Spatial Variations of a Single Phenomenon or a Relationship between Phenomena; 4. Shows Length, Breadth and Height
Codes (A B C D): (A) 3 4 2 1 (B) 2 1 4 3 (C) 4 3 1 2 (D) 1 3 2 4
9 / 20 Q.69. Which one of the following terms is used to measure the extreme peak in a normal distribution? (A) Platykurtic (B) Leptokurtic (C) Mesokurtic (D) Skewed
10 / 20 Q.70. Match List-I with List-II and select the correct answer using the codes given below:
List-I (R.F.): A. 1:2,50,000; B. 1:50,00,000; C. 1:25,000; D. 1:4,000
List-II (Level of Scale): 1. Cadastral; 2. Large; 3. Medium; 4. Small
Codes (A B C D): (A) 4 3 1 2 (B) 1 4 2 3 (C) 3 4 2 1 (D) 2 4 1 3
11 / 20 Q.71. Match List-I with List-II and select the correct answer using the codes given below:
List-I (Statistics): A. Standard Distance; B. Nearest Neighbour Analysis; C. Correlation; D. Eigen Value
List-II (Analysis): 1. Principal Component; 2. Scatter Diagram; 3. Settlement Pattern; 4. Centrographic Measure
Codes (A B C D): (A) 3 4 1 2 (B) 1 2 3 4 (C) 4 3 2 1 (D) 2 3 4 1
12 / 20 Q.72. Morphometric explanation is suitable for the study of (A) Forest Resources (B) Slope Analysis (C) Land Use Pattern (D) Soil Resources
13 / 20 Q.73. The Chi-Square statistical method is mostly used for testing (A) Variability (B) Regression (C) Inter-relation (D) Significance of a frequency distribution
14 / 20 Q.74. Who among the following prepared the Lorenz Curve for showing the inequality of income? (A) Pythagoras (B) G. Taylor (C) Dr. Max O. Lorenz (D) A.H. Robinson
15 / 20 Q.75. Which one of the following phenomenal relationships is depicted by an Ergograph?
(A) Temperature and Absolute Humidity (B) Rainfall and Run Off (C) Crop Production and Rainfall (D) Climate and Growing Season of Crops
16 / 20 Q.76. Which one of the following indicates 'Random' distribution of settlements? (A) 0.49 (B) 1.00 (C) 1.59 (D) 2.15
17 / 20 Q.77. According to the Nearest Neighbour Index, what would be the maximum value for a perfectly uniform settlement distribution? (A) 0.00 (B) 2.15 (C) 1.55 (D) 2.89
18 / 20 Q.78. Which one of the following values of the correlation coefficient (r) is not correctly matched with its degree of relationship? (A) +0.99 High (B) +0.50 Moderate (C) -0.01 Very low (D) -0.99 Nil
19 / 20 Q.79. Which one of the following aspects could be correctly represented in a contour map? (A) Population distribution (B) Terrain elevation (C) Rock hardness (D) Soil distribution
20 / 20 Q.80. Which one of the following correlation coefficients is mismatched with its value? (A) Perfect positive (B) Imperfect negative (C) No correlation (D) Positive significant
Unit IX: Geographical Techniques
Sources of Geographic Information and Data (spatial and non-spatial), Types of Maps, Techniques of Map Making (Choropleth, Isarithmic, Dasymetric, Chorochromatic, Flow Maps), Data Representation on Maps (Pie diagrams, Bar diagrams and Line Graph), GIS Database (raster and vector data formats and attribute data formats).
Functions of GIS (conversion, editing and analysis), Digital Elevation Model (DEM), Georeferencing (coordinate system and map projections and Datum), GIS Applications (thematic cartography, spatial decision support system), Basics of Remote Sensing (Electromagnetic Spectrum, Sensors and Platforms, Resolution and Types, Elements of Air Photo and Satellite Image Interpretation and Photogrammetry), Types of Aerial Photographs, Digital Image Processing: Developments in Remote Sensing Technology and Big Data Sharing and its applications in Natural Resources Management in India, GPS Components (space, ground control and receiver segments) and Applications, Applications of Measures of Central Tendency, Dispersion and Inequalities, Sampling, Sampling Procedure and Hypothesis Testing (chi square test, t test, ANOVA), Time Series Analysis, Correlation and Regression Analysis, Measurement of Indices, Making Indicators Scale Free, Computation of Composite Index, Principal Component Analysis and Cluster Analysis, Morphometric Analysis: Ordering of Streams, Bifurcation Ratio, Drainage Density and Drainage Frequency, Basin Circularity Ratio and Form Factor, Profiles, Slope Analysis, Clinographic Curve, Hypsographic Curve and Altimetric Frequency Graph.
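Several of the items above (e.g. Q.76 and Q.77) turn on the Clark-Evans nearest neighbour index, Rn = 2 * mean nearest-neighbour distance * sqrt(n / A), which runs from 0 (fully clustered) through 1 (random) to about 2.15 (perfectly uniform, triangular lattice). A minimal sketch, using made-up illustrative coordinates rather than real settlement data:

```python
import math

def nearest_neighbour_index(points, area):
    """Clark-Evans nearest neighbour index: Rn = 2 * d_bar * sqrt(n / area).
    Rn ~ 0 for clustered, ~ 1 for random, ~ 2.15 for perfectly uniform points."""
    n = len(points)
    total = 0.0
    for i, (x1, y1) in enumerate(points):
        # distance to this point's nearest neighbour
        nearest = min(
            math.hypot(x1 - x2, y1 - y2)
            for j, (x2, y2) in enumerate(points) if j != i
        )
        total += nearest
    mean_d = total / n
    return 2.0 * mean_d * math.sqrt(n / area)

# Illustrative: a 3x3 square grid of "settlements" over an area of 9 units
grid = [(x, y) for x in range(3) for y in range(3)]
rn = nearest_neighbour_index(grid, area=9.0)  # a regular pattern gives Rn = 2.0
```

Here each grid point's nearest neighbour is exactly 1 unit away, so the index comes out at 2.0, well into the "uniform" end of the scale.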
{"url":"https://www.geographercorner.com/2022/03/geographical-techniques-140-mcqs-mock-test-4-for-nta-ugc-net.html","timestamp":"2024-11-09T07:26:48Z","content_type":"text/html","content_length":"247245","record_id":"<urn:uuid:a08c3968-7a25-4b93-8613-50f2fdbed5ca>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00292.warc.gz"}
adding array top combo box hi is there any way of adding an array to a combo box my code reads the lines from a file converts them to an array and is supposed to add that array to a combo box but it does not work any ideas any #include <GuiConstants.au3> #include <file.au3> Dim $aRecords opt("GUIOnEventMode", 1) ;create gui $box = GUICreate("asset tool", 400, 400) GUISetOnEvent($GUI_EVENT_CLOSE, "_Exit") GUISetIcon(@SystemDir & "\mspaint.exe", 0) $file = FileOpen("D:\Profiles\BHAMA\Desktop\phone\myfile.ini", 0) ; will have to change this ; asset inputbox GUICtrlCreateLabel("Asset Number", 300, 65, 95, 40) $input1 = GUICtrlCreateInput("", 40, 60, 250, 20); asset read command _FileReadToArray("D:\Profiles\BHAMA\Desktop\phone\myfile.ini",$aRecords); this reads the lines from a file and makes them in to an array For $x = 18 to $aRecords[0];starts from line 18 and reads till end of document $look = GUICtrlRead($aRecords[$x]);suposed to read all of the arrays GUICtrlCreateCombo ($look,40,150,250) ; creates a combo box and adds the array value While 1 Func _Exit() EndFunc ;==>_Exit $look = $look & $aRecords[$x] & "|" after the loop $look = stringtrimright($look,1) see if that helps Edited by gafrost SciTE for AutoItDirections for Submitting Standard UDFs Don't argue with an idiot; people watching may not be able to tell the difference. sorry dude not working ive put it after the loop before the loop doesnt pick up the array is there another way to read the lines of a file and display it in to a combo box For $x = 18 To $aRecords[0];starts from line 18 and reads till end of document $look = $look & $aRecords[$x] & "|" $look = StringTrimRight($look, 1) Edited by gafrost SciTE for AutoItDirections for Submitting Standard UDFs Don't argue with an idiot; people watching may not be able to tell the difference. 
yeah thats the way i tried it too but it comes up with an error D:\Profiles\BHAMA\My Documents\_FileReadToArray.au3(25,16) : WARNING: $look: possibly used before declaration. $look = $look & #include <GuiConstants.au3> #include <file.au3> Dim $aRecords opt("GUIOnEventMode", 1) ;create gui $box = GUICreate("asset tool", 400, 400) GUISetOnEvent($GUI_EVENT_CLOSE, "_Exit") GUISetIcon(@SystemDir & "\mspaint.exe", 0) $file = FileOpen("D:\Profiles\BHAMA\Desktop\phone\myfile.ini", 0) ; will have to change this ; asset inputbox GUICtrlCreateLabel("Asset Number", 300, 65, 95, 40) $input1 = GUICtrlCreateInput("", 40, 60, 250, 20); asset read command _FileReadToArray("D:\Profiles\BHAMA\Desktop\phone\myfile.ini",$aRecords); this reads the lines from a file and makes them in to an array For $x = 18 To $aRecords[0];starts from line 18 and reads till end of document $look = $look & $aRecords[$x] & "|" $look = StringTrimRight($look, 1) GUICtrlCreateCombo ($look,40,150,250) ; creates a combo box and adds the array value While 1 Func _Exit() EndFunc ;==>_Exit that's just a warning from the au3check, you choose to ignore it and it should work or, just add $look at the top of the script: #include <GuiConstants.au3> #include <file.au3> Dim $aRecords, $look = "" opt("GUIOnEventMode", 1) ;create gui $box = GUICreate("asset tool", 400, 400) GUISetOnEvent($GUI_EVENT_CLOSE, "_Exit") GUISetIcon(@SystemDir & "\mspaint.exe", 0) $file = FileOpen("D:\Profiles\BHAMA\Desktop\phone\myfile.ini", 0) ; will have to change this ; asset inputbox GUICtrlCreateLabel("Asset Number", 300, 65, 95, 40) $input1 = GUICtrlCreateInput("", 40, 60, 250, 20); asset read command _FileReadToArray("D:\Profiles\BHAMA\Desktop\phone\myfile.ini", $aRecords); this reads the lines from a file and makes them in to an array For $x = 18 To $aRecords[0];starts from line 18 and reads till end of document $look = $look & $aRecords[$x] & "|" $look = StringTrimRight($look, 1) GUICtrlCreateCombo($look, 40, 150, 250) ; creates a combo 
box and adds the array value While 1 Func _Exit() EndFunc ;==>_Exit SciTE for AutoItDirections for Submitting Standard UDFs Don't argue with an idiot; people watching may not be able to tell the difference. yay it works but with a hitch all the records have merged now and apear in one continus line in the combo box in stead of each one on a new box that i can scroll down to any ideas how i can seperate GUICtrlCreateCombo("", 40, 150, 250) ; creates a combo box GUICtrlSetData(-1,$look) ;adds the array value to combo box SciTE for AutoItDirections for Submitting Standard UDFs Don't argue with an idiot; people watching may not be able to tell the difference. sorry to be a pain but its not working this is what i have the field is now blank is there a way to split each record in to a diffrent combo box record #include <GuiConstants.au3> #include <file.au3> Dim $aRecords, $look = "" opt("GUIOnEventMode", 1) ;create gui $box = GUICreate("asset tool", 400, 400) GUISetOnEvent($GUI_EVENT_CLOSE, "_Exit") GUISetIcon(@SystemDir & "\mspaint.exe", 0) $file = FileOpen("D:\Profiles\BHAMA\Desktop\phone\myfile.ini", 0) ; will have to change this ; asset inputbox GUICtrlCreateLabel("Asset Number", 300, 65, 95, 40) $input1 = GUICtrlCreateInput("", 40, 60, 250, 20); asset read command _FileReadToArray("D:\Profiles\BHAMA\Desktop\phone\myfile.ini", $aRecords); this reads the lines from a file and makes them in to an array For $x = 18 To $aRecords[0];starts from line 18 and reads till end of document $look = $look & $aRecords[$x]&'|' $look = StringTrimRight($look, 3) GUICtrlCreateCombo("", 40, 150, 250) ; creates a combo box GUICtrlSetData(-1,$look) ;adds the array value to combo box While 1 Func _Exit() EndFunc ;==>_Exit GUICtrlCreateCombo("", 40, 150, 250) ; creates a combo box GUICtrlSetData(-1,$look) ;adds the array value to combo box I'm trying to load a list or combo box directly from an array. Didn't see this in the helpfile but I tried your sample above. However it doesn't work. 
Am I missing something ? Thanks for any hints; Attached my sample: #include <GuiConstants.au3> #Include <File.au3> #Include <Array.au3> ; GUI GuiCreate("testing", 1024, 768) WinSetState("testing", "", @SW_MAXIMIZE) ; Select and Read a directory $myfoldervar = FileSelectFolder("Choose a folder.", "") ; Show it in a Combo GUICtrlCreateCombo("", 40, 150, 250) ; create combo GUICtrlSetData(-1,$FileList) ;adds the array value to combo box ??? ; GUI MESSAGE LOOP While GuiGetMsg() <> $GUI_EVENT_CLOSE @anwarbham i checked my code with an ini i had and it worked fine, noticed you changed the trim from 1 to 3 only need to trim 1 #include <GuiConstants.au3> #Include <File.au3> #Include <Array.au3> ; GUI GUICreate("testing", 1024, 768) WinSetState("testing", "", @SW_MAXIMIZE) ; Select and Read a directory $myfoldervar = FileSelectFolder("Choose a folder.", "") $FileList = _FileListToArray ($myfoldervar) $s_text = "" For $x = 1 To $FileList[0] $s_text = $s_text & $FileList[$x] & "|" $s_text = StringTrimRight($s_text, 1) ; Show it in a Combo GUICtrlCreateCombo("", 40, 150, 250) ; create combo GUICtrlSetData(-1, $s_text) ;adds the array value to combo box ??? ; GUI MESSAGE LOOP While GUIGetMsg() <> $GUI_EVENT_CLOSE SciTE for AutoItDirections for Submitting Standard UDFs Don't argue with an idiot; people watching may not be able to tell the difference. GUICtrlCreateCombo("", 40, 150, 250) ; creates a combo box GUICtrlSetData(-1,$look) ;adds the array value to combo box I'm having a similar problem... The code is: $NamesBox = GUICtrlCreateCombo("", 112, 8, 145, 21) GUICtrlSetData($NamesBox, $sNameList) The MsgBox shows "Gene|Kent|Charlotte|Sharon|Dwayne|Scott", but the ComboBox is not populated. I tried first with: $NamesBox = GUICtrlCreateCombo("", 112, 8, 145, 21) GUICtrlSetData(-1, $sNameList) The MsgBox showed "Gene|Kent|Charlotte|Sharon|Dwayne|Scott", but that didn't work either. From my understanding of the Help file, either one of those should work. I'm using the .81 beta. 
Any suggestions? Edited by Gene [font="Verdana"]Thanks for the response.Gene[/font]Yes, I know the punctuation is not right... This works for me #include <GUIConstants.au3> $sNameList = "Gene|Kent|Charlotte|Sharon|Dwayne|Scott" GUICreate("My GUI combo") ; will create a dialog box that when displayed is centered $NamesBox = GUICtrlCreateCombo("", 112, 8, 145, 21) GUICtrlSetData($NamesBox, $sNameList) GUISetState () ; Run the GUI until the dialog is closed While 1 $msg = GUIGetMsg() If $msg = $GUI_EVENT_CLOSE Then ExitLoop SciTE for AutoItDirections for Submitting Standard UDFs Don't argue with an idiot; people watching may not be able to tell the difference. This works for me #include <GUIConstants.au3> $sNameList = "Gene|Kent|Charlotte|Sharon|Dwayne|Scott" GUICreate("My GUI combo"); will create a dialog box that when displayed is centered $NamesBox = GUICtrlCreateCombo("", 112, 8, 145, 21) GUICtrlSetData($NamesBox, $sNameList) GUISetState () ; Run the GUI until the dialog is closed While 1 $msg = GUIGetMsg() If $msg = $GUI_EVENT_CLOSE Then ExitLoop Thanks for the reply. This is a weird situation, what you sent doesn't work and neither does the help file sample for _GUICtrlComboAddString(). Oh, and between my previous note and this, I'm now running the .83 beta. I'm running Win2K with sp4 & 256 MB RAM. I modified what you sent to make sure of what beta level was running it. #include <GUIConstants.au3> $sNameList = "Gene|Kent|Charlotte|Sharon|Dwayne|Scott" GUICreate("My GUI combo"); will create a dialog box that when displayed is centered $NamesBox = GUICtrlCreateCombo("", 112, 8, 145, 21) GUICtrlSetData($NamesBox, $sNameList) GUISetState () ; Run the GUI until the dialog is closed While 1 $msg = GUIGetMsg() If $msg = $GUI_EVENT_CLOSE Then ExitLoop And the Help file sample code. 
```autoit
#include <GuiConstants.au3>
#include <GuiCombo.au3>

Dim $Label, $Input, $Btn_Add, $Combo, $Btn_Exit, $msg

GuiCreate("ComboBox Add String", 392, 254)
$Label = GuiCtrlCreateLabel("Enter String to Add", 20, 20, 120, 20)
$Input = GuiCtrlCreateInput("", 160, 20, 180, 20)
$Btn_Add = GuiCtrlCreateButton("Add String", 210, 50, 90, 30)
$Combo = GuiCtrlCreateCombo("A", 70, 100, 270, 21)
$Btn_Exit = GuiCtrlCreateButton("Exit", 150, 180, 90, 30)

While 1
    $msg = GuiGetMsg()
    Select
        Case $msg = $GUI_EVENT_CLOSE Or $msg = $Btn_Exit
            ExitLoop
        Case $msg = $Btn_Add
            If (StringLen(GUICtrlRead($Input)) > 0) Then
                ; the body of this branch was cut off in the quoted sample
            EndIf
    EndSelect
WEnd
```

Of the Help file sample code, all that appears in the ComboBox is the "A".

EDIT: I have tried all attempts as interpreted and as compiled.

Edited by Gene

ah, win2k, just change the height of the combobox to 100 or more, should fix your problem
Vehicle steering equations

I have a vehicle model driving on the highway which accelerates to max speed. I would like to control the vehicle steering angle as well but could not finalize the equations and objective function. And also I am not sure how to get two outputs at the same time from the objective function?

```python
import cvxpy as cvx
import numpy as np
import matplotlib.pyplot as plt
import math
import time

N = 24  # time steps to look ahead
path = cvx.Variable((N, 2))  # initialize the y pos and y velocity
flap = cvx.Variable(N-1, boolean=True)  # initialize the inputs, whether or not the bird should flap in each step
acc = cvx.Variable(N-1, boolean=True)
angle_list = cvx.Variable(N-1, boolean=True)

last_solution = [False, False, False]  # seed last solution
last_path = [(0, 0), (0, 0)]  # seed last path

PIPEGAPSIZE = 100  # gap between upper and lower pipe
PIPEWIDTH = 34
BIRDWIDTH = 34
BIRDHEIGHT = 16
BIRDDIAMETER = np.sqrt(BIRDHEIGHT**2 + BIRDWIDTH**2)  # the bird rotates in the game, so we use its maximum extent
SKY = 50  # location of sky
GROUND = 120  # (150*0.79)-1  # location of ground
PLAYERX = 0  # 180  # location of bird


def getPipeConstraintsDistance(x, y, upperPipes):
    constraints = []  # init pipe constraint list
    pipe_dist = 0  # init dist from pipe center
    for pipe in upperPipes:
        dist_from_front = pipe['x'] - x - BIRDDIAMETER
        dist_from_back = pipe['x'] - x + PIPEWIDTH
        if (dist_from_front < 0) and (dist_from_back > 0):
            constraints += [y <= (pipe['y'] - BIRDDIAMETER)]  # y above lower pipe
            constraints += [y >= (pipe['y'] - PIPEGAPSIZE)]  # y below upper pipe
            pipe_dist += cvx.abs(pipe['y'] - (PIPEGAPSIZE//2) - (BIRDDIAMETER//2) - y)  # add distance from center
    return constraints, pipe_dist


def solve(playerx, playery, playerVel, angle):
    playerAcc = 0.1  # players acceleration
    playerFlapAcc = -1  # players speed on flapping

    # unpack path variables
    y = path[:, 0]
    vy = path[:, 1]

    c = []  # init constraint list
    # print('ilk cccc', c)
    c += [y <= GROUND, y >= SKY]  # constraints for sky and ground
    # print('2. cccc', c)
    c += [y[0] == playery, vy[0] == playerVel]  # initial conditions

    obj = 0
    x = playerx
    xs = [x]  # init x list
    for t in range(N-1):  # look ahead
        dt = t//15 + 1  # let time get coarser further in the look ahead
        # x += playerVel*math.cos(angle)*angle_list[t]*dt  # update x position
        xs += [x]  # add to list
        # c += [vy[t + 1] == vy[t] + playerVel*math.sin(angle)*angle_list[t]*dt]  # add y velocity constraint, f=ma
        # c += [y[t + 1] == y[t] + vy[t + 1]*dt]  # add y constraint, dy/dt = a
        # pipe_c, dist = getPipeConstraintsDistance(x, y[t+1], upperPipes)  # add pipe constraints
        # c += pipe_c
        # c += [playerVel <= 100]
        # obj += dist

    # objective = cvx.Minimize(cvx.sum(flap) + 0.5*cvx.sum(cvx.abs(vy)))  # minimize total flaps and y velocity
    objective = cvx.Maximize(playerVel)  # + cvx.Minimize(cvx.sum(angle_list))
    # objective = cvx.Minimize(cvx.sum(cvx.abs(vy)) + 100*obj)

    prob = cvx.Problem(objective, c)  # init the problem
    # prob.solve(verbose=False)  # use this line for open source solvers
    prob.solve(verbose=False, solver="GUROBI")  # use this line if you have access to Gurobi, a faster solver

    # y.value is the output of prob.solve; check the cvxpy documentation for more
    last_path = list(zip(xs, y.value))  # store the path
    last_solution = np.round(acc.value).astype(bool)  # store the solution
    # print('last solution', acc.value)
    return last_solution[0], last_path  # return the next input and path for plotting

    last_solution = last_solution[1:]  # if we didn't get a solution this round, use the last solution
    last_path = [((x-4), y) for (x, y) in last_path[1:]]
    return last_solution[0], last_path

    return False, [(0, 0), (0, 0)]  # if we fail to solve many times in a row, do nothing
```

• Hi Mustafa, could you please specify what is the difficulty you are dealing with? What is the model you are trying to implement?
Which two outputs are you trying to extract from your objective function?

Best regards

• The main difficulty: I can set a boolean variable for acceleration, `acc = cvx.Variable(N-1, boolean=True)`, where true accelerates and false doesn't. But for the steering angle, how can I model turn left, turn right, or do nothing? How should I define it? The main model works like the below:

```python
if acc:  # output from Gurobi (True or False)
    moved = True
# if angle_left:
#     player_car.rotate(left=True)
# if angle_right:
#     player_car.rotate(right=True)
```

So when writing the vehicle equations, should I consider both options in the equation?

```python
for t in range(N-1):  # look ahead
    dt = t//15 + 1  # let time get coarser further in the look ahead
    x += playerVel*math.cos(angle)*angle_left[t]*dt + playerVel*math.cos(angle)*angle_right[t]*dt  # update x position
    y += playerVel*math.sin(angle)*angle_left[t]*dt + playerVel*math.sin(angle)*angle_right[t]*dt  # update y position
```

• Hi Mustafa, could you please share a formulation of the mathematical model (LP, MIP, ...) you are trying to solve? Rather than code, a mathematical formulation would be useful. An angle in which a vehicle should move could be modeled, for example, by a continuous or integer variable in the range \([0,180]\). But without understanding the full model I can't give you any further guidance.

Best regards

• Hello Jonasz, I added the vehicle equations below. Thanks for the help.

• If my understanding of your problem is correct, these equations are used to update the position of a car. I also infer that you want Gurobi to tell you the acceleration and the vehicle's angle. What we need to know is:

1. What is the overall aim of this code, i.e. what is the objective? What should be optimized in each step?
2. Which constraints should be taken into account when deciding about the acceleration and angle in each step?

Best regards

• I have a red car below and would like to reach the finish line.
Objective function = Minimize(x_car - X_finish)

For constraints:
□ Road boundaries, the top and bottom red lines on the road
□ Green vehicles

•

```python
def solve(playerx, playery, playervel_hiz, angle, computercar_x, computercar_y):
    # unpack path variables
    y = path[:, 0]
    vy = path[:, 1]

    c = []  # init constraint list
    # c += [y <= RightRoadBorder, y >= LeftRoadBorder]  # constraints for highway boundaries
    # c += [y <= secondLaneBorder]  # constraints for highway boundaries
    # c += [y[0] == playery, vy[0] == playerVel]  # initial conditions
    c += [x[0] == playerx, playerVel[0] == playervel_hiz]  # initial conditions

    playerAcc = 0.1  # players acceleration
    obj = 0
    for t in range(N-1):  # look ahead
        dt = t//15 + 1  # let time get coarser further in the look ahead
        c += [playerVel[t+1] == playerVel[t] + playerAcc*acc[t]*dt]
        c += [y[t + 1] == y[t] + playerVel[t]*math.sin(angle)*dt]  # add y constraint, dy/dt = a
        c += [x[t + 1] == x[t] + playerVel[t]*math.cos(angle)*dt]  # add x constraint
        # vehicle_c, dist = getVehicleConstraintsDistance(x, y[t+1], computercar_x, computercar_y)  # add vehicle constraints
        # c += vehicle_c
        # c += [playerVel <= 100]
        # obj += dist

    objective = cvx.Maximize(cvx.sum(acc))  # cvx.Maximize(playerVel) + cvx.Minimize(cvx.sum(angle_list))
    prob = cvx.Problem(objective, c)
```

Now the model is like this. How should I add the angle to the objective, or how should I get output from it?

• Hi Mustafa, I am afraid I still have too little information to help you. You could try writing down the exact mathematical model to be solved with each iteration, before updating the values for v, x, y. We might then be able to help you. You could also look for inspiration e.g. here. Maybe their approaches could serve as inspiration. Also, to query the solution via cvxpy you could first optimize and, assuming a solution is found, iterate over your variables and query their value parameter.
You can find more information here.

Best regards

• Hello Jonasz, I have shared the mathematical equations in the previous post. I think you are asking for the x_dot = Ax + Bu form of the equations, but do I need this? Because currently my model works for acceleration with the current setup. I am not sure how to get output for the steering angle!

• Hi Mustafa, some more questions and thoughts:

1) Why do you model angles as binary variables?

```python
angle_list = cvx.Variable(N-1, boolean=True)
```

Don't they have to take some other value, for example between 0 and 180 degrees or 0 and 2*pi radians?

2) Gurobi allows for the inclusion of trigonometric functions (for example, sine). To be exact, a piecewise linearization of the function is added, but you can control its granularity. I am no expert in cvxpy, and hence can't tell you how to access this Gurobi-specific constructor from cvxpy.

3) Once you include variables for the angle and the appropriate constraints (as discussed in points 1) and 2)) you should be able to optimize and then query each variable and its value (assuming the optimization is successful). You can find an example in one of my posts above.

Hope this helps.

Best regards

• 1) Why do you model angles as binary variables?

```python
left_angle = cvx.Variable(N-1, boolean=True)
right_angle = cvx.Variable(N-1, boolean=True)
```

My main vehicle model changes the angle one degree at a time, so I was thinking to get True or False for the angle: if the left angle is true, rotate one degree left; if the right angle is true, rotate one degree right.

```python
if Acc:
    moved = True
if left_angle:
    # rotate one degree left
if right_angle:
    # rotate one degree right
```

```python
c += [playerVel[t+1] == playerVel[t] + playerAcc*acc[t]*dt]
c += [angle == angle + 1*left_angle[t] - 1*right_angle[t]]
c += [y[t + 1] == y[t] + playerVel[t]*math.sin(angle)*dt]  # add y constraint, dy/dt = a
c += [x[t + 1] == x[t] + playerVel[t]*math.cos(angle)*dt]  # add x constraint
```
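As a side note on the dynamics debated in this thread: the one-degree-per-step turning rule can be prototyped as a plain simulation before handing it to a solver. This is only a sketch of the update equations as described in the posts; the function name `rollout`, the default `dt`, and the fixed 0.1 acceleration are my assumptions, not part of the original model:

```python
import math

def rollout(x, y, v, angle_deg, accs, lefts, rights, dt=1.0, a=0.1):
    """Simulate the car update rule from the thread: each step may
    accelerate (acc) and turn one degree left or right."""
    for acc, lf, rt in zip(accs, lefts, rights):
        v += a * acc * dt                 # playerVel update
        angle_deg += 1 * lf - 1 * rt      # angle accumulates +/- 1 degree
        th = math.radians(angle_deg)
        x += v * math.cos(th) * dt        # x position update
        y += v * math.sin(th) * dt        # y position update
    return x, y, v, angle_deg

# Driving straight at constant speed moves only along x
print(rollout(0.0, 0.0, 1.0, 0.0, [0]*5, [0]*5, [0]*5))  # -> (5.0, 0.0, 1.0, 0.0)
```

Checking a rollout like this against the solver's returned `acc`, `left_angle`, and `right_angle` values is a quick way to confirm that the constraints encode the intended kinematics.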
A/C Repair Yields Over 31 Percent: Financial calculators aren't normally a hot topic, even in real estate circles, but I took the opportunity to attend an information session on the subject some time ago, and I haven't regretted it. The instructor's name was Gary Johnston, and the night of the session, he forever changed the way I look at investments by introducing me to the financial calculator. Consequently, I went on to attend his three-day seminar called "Financial Principles". I know what you're thinking: "Three days of running numbers through a calculator?" Trust me, once you go, you'll understand. It's not about crunching numbers and figuring out mortgage payments. It's about learning to ask "if I spend my money on this, will it grow? If it does, how much will it grow, and how fast?" Let's look at a real deal, and I'll show you how to do some calculations. I recommend using an app called the "10bii Financial Calculator." It's available for iPhone and Android. (Note: Visit our blog on this one. I'll put up screen shots of the calculator so you can see what it looks like filled out.) At the top of your calculator, you'll see five buttons: N, I/YR, PV, PMT, and FV. Each button stands for a piece of information you'll need to do a calculation. If you know four pieces of the information, you can find the fifth. Here's what each button stands for:

N: number of payments (in months)
I/YR: interest/yield
PV: present value of the loan or investment
PMT: payments
FV: future value

For most real estate calculations, FV is zero; that's because most mortgages will be paid off in the future. Now we have one piece of information for our example. Let's go find the others. The deal I want to look at is a single wide we bought on an acre of land for $15,000. That means the present value is negative $15,000. It's negative because it's money that we paid out. (In the financial calculator world, money you pay out is negative while money paid to you is positive.
If you get that wrong in PMT and PV, the calculator will give you an error). That house rents for $425 a month. Those are payments made to us which makes PMT a positive number. Let’s look and see how well this investment performs over eight years. We have N, PV, PMT and FV. We want to find I/YR. The values look like this: N=96 (8 years); I/YR= ?; PV= -15,000; PMT= +425; FV=0.00 I/YR equals 31.08 percent! Now you’re thinking “That’s an awesome return, but I thought we were going to talk about A/Cs.” Here’s how they factor in: when we got back from Gary’s course, we were rehabbing the previously-described deal. The house had no A/C, so I called my repairman to ask how much central would cost. He said he could do it for $1,500. I didn’t want to spend that. But with central A/C, we could get an extra $50 a month in rent. Armed with the knowledge Gary gave us, we can put it into the calculator. PV is the cost of the A/C, PMT is the extra rent and FV is zero. To find I/YR, We’ll need to assign a value to N. It’s reasonable to assume you can get eight years out of a new A/C with no repairs. So: N=96 (8 yrs); I/YR= ?; PV= -1,500; PMT= +50; FV=0.00 So I/YR equals 37.99 percent! Suppose the A/C went bad in five years? N=60 (5 yrs); I/YR= ?; PV= -1,500; PMT= +50; FV=0.00 Even then, this deal still yields 31.58 percent! And we just demonstrated how a repair can be perceived as a deal. That’s what Gary can do for you. He can teach you to see deals in places you never would’ve looked before. To see his seminar schedule, check out GaryJohnston.com. Joe and Ashley English invest in real estate in Northwest Georgia. For more information or to ask a question, go to www.cashflowwithjoe.com
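The calculator's I/YR solve can also be reproduced in a few lines of code. The sketch below is a bisection search on the standard annuity present-value identity, with monthly compounding (my assumption about how the 10bii reports I/YR as a nominal annual rate); it recovers all three yields quoted above:

```python
def solve_annual_yield(n_months, pv, pmt, lo=1e-9, hi=1.0):
    """Solve for I/YR given N, PV, and PMT (FV = 0) by bisection on
    the annuity identity: PV = PMT * (1 - (1 + i)**-N) / i."""
    def annuity_pv(i):
        return pmt * (1 - (1 + i) ** -n_months) / i

    for _ in range(200):              # bisect on the monthly rate i
        mid = (lo + hi) / 2
        if annuity_pv(mid) > pv:      # computed PV too high -> rate too low
            lo = mid
        else:
            hi = mid
    return 12 * 100 * (lo + hi) / 2   # nominal annual rate, in percent

print(round(solve_annual_yield(96, 15000, 425), 2))  # house deal: ~31.08
print(round(solve_annual_yield(96, 1500, 50), 2))    # A/C over 8 years: ~37.99
print(round(solve_annual_yield(60, 1500, 50), 2))    # A/C over 5 years: ~31.58
```

Seeing the same numbers fall out of the formula is a good way to build trust in what the five calculator buttons are doing.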
How to use SYD function in VBA?

The SYD function in Excel VBA (Visual Basic for Applications) is used to calculate the depreciation of an asset for a specified period using the Sum-of-Years' Digits depreciation method. This is not a built-in function in VBA itself but can be used by accessing Excel's worksheet functions from VBA. Here's how to use it in VBA:

```vba
Function CalculateSYD(Cost As Double, Salvage As Double, Life As Integer, Period As Integer) As Double
    CalculateSYD = Application.WorksheetFunction.SYD(Cost, Salvage, Life, Period)
End Function
```

This custom VBA function, CalculateSYD, takes the cost of the asset, the salvage value at the end of the asset's life, the life of the asset in periods (usually years), and the period for which you want to calculate depreciation. To use the CalculateSYD function, you would call it from another VBA subroutine or function:

```vba
Sub UseSYDFunction()
    Dim initialCost As Double
    Dim salvageValue As Double
    Dim assetLife As Integer
    Dim depreciationPeriod As Integer
    Dim depreciationAmount As Double

    ' Example values
    initialCost = 10000       ' The initial cost of the asset
    salvageValue = 1000       ' The salvage value of the asset at the end of its life
    assetLife = 5             ' The useful life of the asset in years
    depreciationPeriod = 1    ' The period for which to calculate depreciation

    ' Calculate depreciation
    depreciationAmount = CalculateSYD(initialCost, salvageValue, assetLife, depreciationPeriod)

    ' Output result
    Debug.Print "Depreciation for period " & depreciationPeriod & ": " & depreciationAmount
End Sub
```

Remember to adjust the initialCost, salvageValue, assetLife, and depreciationPeriod with the actual values for your specific case. Please note that VBA does not check the type of variables being passed like strongly typed languages. If you mismatch the input parameters, for example, passing a string instead of an integer, it may lead to errors or incorrect results.
Always make sure you pass the right parameters to functions, such as CalculateSYD in this case.
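As a sanity check on the example values, the Sum-of-Years' Digits method can also be computed directly. Assuming the standard SYD formula, where the depreciation for period p is (cost - salvage) * (life - p + 1) / (1 + 2 + ... + life), the example values work out as follows:

```python
def syd(cost, salvage, life, period):
    """Sum-of-Years' Digits depreciation for a single period."""
    digits_sum = life * (life + 1) // 2           # 1 + 2 + ... + life
    return (cost - salvage) * (life - period + 1) / digits_sum

# Example values from the article: cost 10000, salvage 1000, life 5
print(syd(10000, 1000, 5, 1))  # first period: 9000 * 5/15 = 3000.0
print(syd(10000, 1000, 5, 5))  # last period:  9000 * 1/15 = 600.0

# Over the full life, depreciation sums back to cost - salvage
print(sum(syd(10000, 1000, 5, p) for p in range(1, 6)))  # 9000.0
```

So with the example inputs above, the Debug.Print output for period 1 should be 3000.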
What is: Asymptotic Distribution

What is Asymptotic Distribution?

Asymptotic distribution refers to the behavior of a sequence of probability distributions as the sample size approaches infinity. In statistical theory, it is crucial for understanding how estimators behave in large samples. Specifically, it describes the limiting distribution of a statistic, which can be particularly useful when exact distributions are difficult to derive. Asymptotic distributions often simplify the analysis of complex statistical problems, allowing researchers to make inferences about population parameters based on sample data.

Importance of Asymptotic Distribution in Statistics

The significance of asymptotic distribution lies in its application to various statistical methods, including hypothesis testing and confidence interval estimation. When dealing with large sample sizes, many statistical estimators, such as the sample mean or sample variance, can be approximated by normal distributions due to the Central Limit Theorem (CLT). This property allows statisticians to apply normal distribution techniques to make inferences about population parameters, even when the underlying population distribution is not normal. Understanding asymptotic behavior is essential for developing robust statistical models and ensuring the validity of inferential statistics.

Central Limit Theorem and Asymptotic Distribution

The Central Limit Theorem is a cornerstone of probability theory that establishes the foundation for asymptotic distributions. It states that, given a sufficiently large sample size, the sampling distribution of the sample mean will approach a normal distribution, regardless of the shape of the population distribution. This theorem is pivotal in justifying the use of normal approximations in statistical inference.
As a result, asymptotic distributions derived from the CLT are widely used in various fields, including economics, psychology, and the natural sciences, where large datasets are common.

Types of Asymptotic Distributions

Several types of asymptotic distributions are commonly encountered in statistical analysis. The most notable include the normal distribution, chi-squared distribution, t-distribution, and F-distribution. Each of these distributions has specific applications and properties that make them suitable for different statistical scenarios. For example, the normal distribution is often used for estimating population means, while the chi-squared distribution is utilized in tests of independence and goodness-of-fit. Understanding the characteristics of these distributions is essential for selecting the appropriate statistical methods for data analysis.

Applications of Asymptotic Distribution in Data Science

In data science, asymptotic distributions play a vital role in model evaluation and validation. As data scientists often work with large datasets, the principles of asymptotic behavior allow them to make reliable predictions and inferences. For instance, when developing machine learning models, understanding the asymptotic properties of estimators can help in assessing the model's performance and generalizability. Additionally, asymptotic distributions are used in techniques such as bootstrapping and cross-validation, which are essential for estimating the accuracy of predictive models.

Limitations of Asymptotic Distribution

Despite its usefulness, asymptotic distribution has limitations that researchers must consider. One major limitation is that the asymptotic properties may not hold for small sample sizes. In such cases, the approximations provided by asymptotic distributions can lead to inaccurate conclusions.
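The convergence described by the CLT is easy to demonstrate numerically. In the sketch below (the sample size, replication count, and choice of an exponential population are arbitrary illustrative choices), means of samples drawn from a heavily skewed population are standardized and behave approximately like a standard normal:

```python
import math
import random

random.seed(0)
n, reps = 200, 3000

# Skewed population: exponential with mean 1 and variance 1
def sample_mean():
    return sum(random.expovariate(1.0) for _ in range(n)) / n

# Standardized sample means, (xbar - mu) / (sigma / sqrt(n)),
# should be approximately N(0, 1) even though the population is skewed
z = [(sample_mean() - 1.0) * math.sqrt(n) for _ in range(reps)]

mean_z = sum(z) / reps
std_z = math.sqrt(sum((v - mean_z) ** 2 for v in z) / reps)
print(round(mean_z, 2), round(std_z, 2))  # close to 0 and 1
```

With larger n, a histogram of z would look increasingly like the familiar bell curve, which is exactly the limiting behavior the theorem promises.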
Furthermore, the convergence to the asymptotic distribution can be slow, meaning that for practical applications, a large sample size may be required to achieve reliable results. Therefore, it is crucial for statisticians and data scientists to be aware of these limitations when applying asymptotic approximations.

Asymptotic Distribution in Hypothesis Testing

Asymptotic distributions are integral to hypothesis testing, where they provide the basis for determining critical values and p-values. In many statistical tests, such as the z-test and t-test, the test statistics are derived from sample data and compared against their asymptotic distributions to make decisions about the null hypothesis. For example, in a z-test, as the sample size increases, the distribution of the test statistic approaches a standard normal distribution, allowing researchers to use z-scores to assess significance levels. This connection between asymptotic distributions and hypothesis testing is fundamental to the field of inferential statistics.

Asymptotic Distribution and Estimation Theory

In estimation theory, asymptotic distributions are used to evaluate the properties of estimators, such as consistency and efficiency. An estimator is said to be consistent if it converges in probability to the true parameter value as the sample size increases. Asymptotic normality is a desirable property for estimators, as it allows for the construction of confidence intervals and hypothesis tests. By analyzing the asymptotic distribution of an estimator, researchers can derive important insights into its performance and reliability, ultimately guiding the choice of estimation methods in statistical practice.

Conclusion on Asymptotic Distribution

Asymptotic distribution is a fundamental concept in statistics, data analysis, and data science, providing a framework for understanding the behavior of estimators and test statistics in large samples.
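The z-test just described can be written directly against the asymptotic normal distribution. This sketch makes two simplifying assumptions of my own, a known population sigma and the helper name `z_test_mean`; it returns the z statistic and a two-sided p-value computed from the standard normal CDF:

```python
import math

def z_test_mean(xs, mu0, sigma):
    """z-test for H0: population mean == mu0, assuming known sigma."""
    n = len(xs)
    z = (sum(xs) / n - mu0) / (sigma / math.sqrt(n))
    # Two-sided p-value via the standard normal CDF (expressed with erf)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# When the sample mean equals mu0 exactly, z is 0 and p is 1
z, p = z_test_mean([1.0, 2.0, 3.0], mu0=2.0, sigma=1.0)
print(round(z, 3), round(p, 3))  # 0.0 1.0
```

The key asymptotic idea is that this normal reference distribution is justified for large n even when the data themselves are not normal.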
Its applications span various fields, making it an essential tool for researchers and practitioners alike. By leveraging the principles of asymptotic behavior, statisticians can make informed inferences about population parameters, validate models, and ensure the robustness of their analyses. Understanding asymptotic distributions is crucial for anyone working with statistical data, as it underpins many of the methodologies used in modern data science.
Topology Optimization

Topology optimization is a mathematical method which spatially optimizes the distribution of material within a defined domain, by fulfilling previously established constraints and minimizing a predefined cost function. The conventional topology optimization formulation uses a finite element method (FEM) to evaluate the design performance. Topology optimization has a wide range of applications in aerospace, mechanical, bio-chemical and civil engineering.

Currently, engineers mostly use topology optimization at the concept level of a design process. Due to the free forms that naturally occur, the result is often difficult to manufacture. For that reason the result emerging from topology optimization is often fine-tuned for manufacturability.

Solving topology optimization problems in a discrete sense is done by discretizing the design domain into finite elements. The material densities inside these elements are then treated as the problem variables. In this case a material density of one indicates the presence of material, while zero indicates an absence of material.

The complexities of solving topology optimization problems using binary variables have caused the community to search for other options. Topology optimization has been used by mechanical and civil engineers for many years, for example in order to minimize the amount of used material and the strain energy of structures while maintaining their mechanical strength. The traditional solutions for structural optimization problems in buildings were determined by the use of direct search methods on an Isotropic Solid and Empty (ISE) topology.
Text Source:
- Topology optimization / Wikipedia
- Topology Optimization / ScienceDirect

Video Source:
1- MX3D Printed Bridge Update 2018 / Youtube / MX3D
2- MX3D Bridge Placement, Amsterdam 2021 / Youtube / MX3D
3- World's first 3D printed STEEL bridge | MX3D / Youtube / Belindar Carr
4- Producing the world's first 3D-printed bridge with robots "is just the beginning" – Joris Laarman / Youtube / Dezeen
5- Ameba Topology Optimization Software Based on Grasshopper / Youtube / Ameba
6- 3D concrete printing of a topology-optimized bridge / Youtube / Concrete3DLab Ghent
7- Topology-optimized concrete bridge / Youtube / Concrete3DLab Ghent
8- Topology optimisation of a bridge / Youtube / Jordan Burgess
9- Topology optimization for additive manufacturing Part 1/4 / Youtube / Jun Wu
10- Topology optimization for additive manufacturing Part 4/4 / Youtube / Jun Wu
11- 3F3D – Form Follows Force with 3D Printing / Youtube / Bayu Prayudhi

Image Source:
[1]- 3D-Printed Stay-in-Place Formwork for Topologically Optimized Concrete Slabs / ResearchGate / Andrei Jipa, Mathias Bernhard, Mania Meibodi, Benjamin Dillenburger
[2]- New Opportunities to Optimize Structural Designs in Metal by Using Additive Manufacturing / ResearchGate / Salomé Galjaard, Sander Hofman, Shibo Ren
[3]- Topology Optimization is not Generative Design / Autodesk
[4]- "Melonia" shoes – designed by Naim Josefi / Materialise
[5]- A lightweight topology optimized motorcycle frame manufactured using metal 3D printing / Formlabs
[6]- TO flow / 3DPrint.com
[7]- The bracket geometry with eight load cases / comsol.com
NCERT Solutions for Class 8 Maths Chapter 5 Data Handling NCERT Solutions for Class 8 Maths Chapter 5 Data Handling Exercise 5.1 Ex 5.1 Class 8 Maths Question 1. For which of these would you use a histogram to show the data? (i) The number of letters for different areas in a postman’s bag. (ii) The height of competitors in an athletics meet. (iii) The number of cassettes produced by 5 companies. (iv) The number of passengers boarding trains from 7 a.m to 7 p.m at a station. Give a reason for each. (i) The number of areas cannot be represented in class-intervals. So, we cannot use the histogram to show the data. (ii) Height of competitors can be divided into intervals. So, we can use histogram here. For example: (iii) Companies cannot be divided into intervals. So, we cannot use histogram here. (iv) Time for boarding the train can be divided into intervals. So, we can use histogram here. For example: Ex 5.1 Class 8 Maths Question 2. The shoppers who come to a departmental store are marked as: man (M), woman (W), boy (B) or girl (G). The following list gives the shoppers who came during the first hour in the morning. W W W G B W W M G G M M W W W W G B M W B G G M W W M M W W W M W B W G M W W W W G W M M W W M W G W M G W M M B G G W Make a frequency distribution table using tally marks. Draw a bar graph to illustrate it. Ex 5.1 Class 8 Maths Question 3. The weekly wages (in ₹) of 30 workers in a factory are: 830, 835, 890, 810, 835, 836, 869, 845, 898, 890, 820, 860, 832, 833, 855, 845, 804, 808, 812, 840, 885, 835, 835, 836, 878, 840, 868, 890, 806, 840 Using tally marks make a frequency table with intervals as 800-810, 810-820 and so on. Ex 5.1 Class 8 Maths Question 4. Draw a histogram for the frequency table made for the data in Question 3, and answer the following questions: (i) Which group has the maximum number of workers? (ii) How many workers earn ₹ 850 and more? (iii) How many workers earn less than ₹ 850? Refer to the frequency table of Question No. 3. 
(i) Group 830-840 has the maximum number of workers, i.e., 9. (ii) 10 workers earn equal and more than ₹ 850. (iii) 20 workers earn less than ₹ 850. Ex 5.1 Class 8 Maths Question 5. The number of hours for which students of a particular class watched television during holidays is shown through the given graph. Answer the following questions. (i) For how many hours did the maximum number of students watch TV? (ii) How many students watched TV for less than 4 hours? (iii) How many students spent more than 5 hours watching TV? (i) 32 is the maximum number of students who watched TV for 4 to 5 hours. (ii) 4 + 8 + 22 = 34 students watched TV for less than 4 hours. (iii) 8 + 6 = 14 students watched TV for more than 5 hours. More CBSE Class 8 Study Material
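The tally for Question 2 can be double-checked with a short script. Assuming the shopper sequence is copied correctly from the question, counting the codes reproduces the frequencies behind the table and bar graph:

```python
from collections import Counter

# Shopper codes from Question 2, in the order given
shoppers = ("W W W G B W W M G G M M W W W W G B M W B G G M W W M M W W "
            "W M W B W G M W W W W G W M M W W M W G W M G W M M B G G W").split()

counts = Counter(shoppers)
print(counts["W"], counts["M"], counts["G"], counts["B"])  # 28 15 12 5
print(sum(counts.values()))  # 60 shoppers in the first hour
```

These counts (W = 28, M = 15, G = 12, B = 5) are the bar heights for the bar graph asked for in the question.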
Code gems - Absolute value function

Most of the time, little bits of code are straight-forward. But every machine I've worked on has little bits of code sequences that are very clever. Sometimes the code has instructions that are rarely seen and other times the instructions may be used in odd ways. The older the architecture or the more complex the instruction set, the more chance you have of finding these little gems. This post is about the ABS function in the Intel x86 architecture.

One of the simpler functions to code would have to be the absolute value function. Given the value X, the absolute value would be defined as:

```c
int abs1(int x)
{
    if (x < 0)
        x = -x;
    return x;
}
```

Pretty simple. This would easily translate into x86 assembler like so. (Assume register eax has the value to be operated on in these examples. I have left out the prologue and epilogue from the function, which wouldn't exist if the function were in-lined. The numbers in parens are the number of code bytes.)

```asm
    TEST EAX,EAX   ; (2) compare eax with zero
    JGE  SKIP1     ; (2) jump if eax >= zero
    NEG  EAX       ; (2) negate eax
SKIP1:
```

While this is not a lot of code, it has a jump instruction which on many processors can be expensive. Some recent processors can cut down on this expense if they can predict the jump, but it would be nice to find a solution with no jumps. Let's look at a different way of doing this:

```c
int abs2(int x)
{
    int temp = (x >> 31);   /* assumes a 32-bit integer */
    return (x ^ temp) - temp;
}
```

It helps to remember that the expression "x>>31" does an arithmetic right shift of x by 31 bits and that "x ^ temp" means "x exclusive-or temp". The value of temp is either 0 if x is positive or 0xFFFFFFFF (or -1) if x is negative. At this point, you may want to remember that in 2-complement notation, -x = ~x+1. So, if x is positive, temp will be 0, x^temp will just be x, and the function returns x. If x is negative, temp will be -1 and x^temp will be the same as ~x, so the expression is the same as ~x - (-1), or ~x+1, which is the same as -x.
If x is negative, temp will be -1, x^temp will be the same as ~x, so the expression is the same as ~x - (-1), or ~x+1, which is the same as -x. The abs2 function won't generate great code, but it will avoid any jumps:

    MOV EAX,ECX   ; wait — original order kept below
    MOV ECX,EAX   ; (2) copy x to ecx
    SAR ECX,1FH   ; (3) sign extend through all of ecx -> temp
    XOR EAX,ECX   ; (2) x ^ temp
    SUB EAX,ECX   ; (2) (x ^ temp) - temp

Not too bad, but even this can be improved upon:

    CDQ           ; (1) extend the sign of eax into edx (temp)
    XOR EAX,EDX   ; (2) exclusive-or temp into x
    SUB EAX,EDX   ; (2) subtract temp from (x ^ temp)

The CDQ instruction extends the sign of EAX into EDX. It is a 1-byte instruction that dates back to at least the 8086. If your numbers are in signed-magnitude or one's-complement form, then the sign bit just needs to be turned off. Floating point formats typically have an explicit sign bit, so the absolute value function is easier on these formats. But these formats have other corner cases to worry about, like two forms of zero, which makes checking for equality a bit tricky.

I found this implementation buried in ntdll:

    MOV    EAX,ECX  ; (2) make copy of X into EAX
    NEG    EAX      ; (2) negate X
    TEST   ECX,ECX  ; (2) what sign does X have?
    CMOVNS EAX,ECX  ; (3) conditional move X to EAX if X is positive

This code also has the feature that there are no jumps, but it takes more code. And even though the CMOV instructions have been around for more than 10 years on Intel processors, there seems to be some question as to whether or not they improve performance. It would be interesting to know how this code was generated.

It is worth noting that there is one negative value for which the absolute value function will return a negative value. This value is the most negative value, e.g. 0x80000000. This could cause problems if code assumes that values that have gone through the absolute value function are in the positive number range.
An improved Bayesian inference model for auto healing of concrete specimen due to a cyclic freeze-thaw experiment

This paper presents an innovative solution for the auto healing of porous structures damaged by cyclic freeze-thaw, followed by prediction of the recovered freezing damage based on Bayesian inference. The additional hydration of high-strength material, cured at high temperature, is applied as auto curing for the damaged micro-pore structures. Modeling of the micro-pore structure precedes the damage analysis. The amount of ice volume with temperature-dependent surface tensions, the freezing pressure and resulting deformations, and the cycle- and temperature-dependent pore volume have been predicted and compared with available test results. By heating a selected area of the specimen in a freezing chamber, approximately 100 % strength recovery has been observed after 10 days of freeze-thaw tests, both in the proposed nonlinear stochastic prediction models and in the experimental results.

1. Introduction

The most important design parameters concerning cyclic freeze-thaw are the distribution of micro-pores and the saturation of pores, which depend on the freezing expansion pressure as a function of temperature, developed in computational programs by Cho [1]. In this study a probabilistic prediction model is proposed for porous material damaged by cyclic freeze-thaw, and it is verified in experiments. Previous research on stochastic modeling of the deterioration of structural components and systems has the following limitations [2-4]: 1) effort has focused on identifying past causes or results of failure rather than predicting future failure; 2) important design variables, which significantly affect the system response, are determined subjectively; 3) consequently, the evaluation results may be of limited use for predicting future degradation with full modeling of field variables consisting of highly correlated composite elements.
In highly correlated system models, specifically when the parameters are of multiple dimensions, it is often impossible to present the marginal distribution of each parameter analytically. In this research the mentioned limitations have been reduced appreciably by modeling uncertainty. This paper is organized as follows. In the second section, freeze-thaw damage and auto healing in concrete are described. In the third section, a Bayesian hierarchical model for correlated data is presented. In the fourth section, the predicted healed strength is compared with experimental results. Finally, the work is summarized in the last section.

2. Freeze-thaw damage and auto healing in concrete

Hydration and microstructural information are obtained from the analysis results of DuCOM [5]. In DuCOM, for the hydration model, multi-component chemical reactions and compounds are considered as input data. Based on solidification theory, the reactants, mainly cement, aggregate and water, produce C-S-H gel structures, modeled schematically as a cluster in Fig. 1, which shows that the solidification process of cement paste is idealized by the formation of finite, age-independent structural elements called clusters.

Fig. 1. The schematic representation of hydration solidification

The aging process of cement paste is represented by the solidification of new clusters. As hydration proceeds, the number of clusters increases (Fig. 2). Fagerlund [6], Penttala [7] and Cho [8] reported that concrete structures damaged by cyclic freeze-thaw are affected by two parametric categories: material and load parameters.

Fig. 2. The number of clusters increases as hydration proceeds

Fig. 3. Structural degradation by cyclic freeze-thaw, with the increase of saturation and degradation of entrained air

The material parameters are the water to cement ratio (W/C), entrained air pores, mix ratio, hydration, and the concentration of chloride ions, shown in Fig. 3. Recently Nakarai et al.
proposed an enhanced model that adds part of the moisture in the inter-hydrate pores to the free water and considers the change of adsorbed water associated with relative humidity (Model B), to explain the continuous hydration process of low-W/C concrete under adiabatic temperature conditions (Fig. 4). The large temperature rise was predicted by considering the increase in the amount of free water available for hydration [9].

Fig. 4. Hydration process under adiabatic temperature conditions in terms of auto healing

3. Bayesian hierarchical model for correlated data

3.1. Hierarchical modeling

Bayesian hierarchical modeling has the following marginal likelihood, when data $y$ have not been observed yet:

$f(y) = \int f(y \mid \theta) \, f(\theta) \, d\theta$,

which is the likelihood averaged over all parameter values supported by our prior beliefs; $f(y)$ is called the prior predictive distribution. The posterior predictive distribution is given by:

$f(y'' \mid y) = \int f(y'' \mid \theta) \, f(\theta \mid y) \, d\theta$,

which is the likelihood of the future data averaged over the posterior distribution $f(\theta \mid y)$ [10]. This distribution is termed the predictive distribution since prediction is usually attempted only after observation of a set of data $y$. Future observations $y''$ can alternatively be viewed as additional parameters under estimation. From this perspective, the joint posterior distribution is now given by $f(y'', \theta \mid y)$. The Markov Chain Monte Carlo (MCMC) method is used to obtain this posterior distribution, from which the imputed values for missing observations or future predicted data are drawn. Inference on the future observations $y''$ can be based on the marginal posterior distribution $f(y'' \mid y)$ obtained by integrating out all nuisance parameters $\theta$.
Hence the predictive distribution is given by $f(y'' \mid y) = \int f(y'' \mid \theta) \, f(\theta \mid y) \, d\theta$, since past and future observables, $y$ and $y''$, are conditionally independent given the parameter vector $\theta$.

3.2. Comparison of prediction by linear and quadratic model

Linear regression models are the most popular models in the statistical sciences. In a linear regression model, the response variable $Y$ is considered to be a continuous random variable defined over the whole set of real numbers and following the normal distribution. The following equation is selected:

$Y_{ij} = \alpha_i + \beta_i (x_j - \bar{x})$,

where $\bar{x}$ is the mean value of maintenance (duration of service). Due to the absence of a parameter representing the correlation between $\alpha_i$ and $\beta_i$, the $x_j$ are standardized around their mean to reduce the dependence between $\alpha_i$ and $\beta_i$ in their likelihood, achieving complete independence. When component functions are correlated with each other and show nonlinearity, problems can arise in predicting future values with a linear stochastic regression model. Because of their synergy, or their dependence as causes with resulting growing damage, the deterioration of an infrastructure in general shows inelastic behavior. The synergic effects have been evaluated in deterministic and probabilistic ways, which revealed worse deterioration than a linear superposition. Therefore the following quadratic regression model is proposed, in which each variable in turn serves as the dependent variable while the other variables in the dataset serve as the independent variables:

$Y_{ij} = \alpha_i + \beta_i (x_j - \bar{x}) + \gamma_i (x_j - \bar{x})^2$.

The model parameter estimates are then used to make random draws from the multinomial distribution for each missing response on the dependent variable in the regression.
The two stochastic regression models are compared for their fitness in the next section.

4. Healed strength predicted, compared with experimental results

4.1. Parameters of experiments

The important factors for the damage by cyclic freeze-thaw are mix proportion, dimension, and curing condition. To decide the heat control and to identify the auto healing effect, the following parameters have been selected. For the mix proportion, among the water to cement ratios of 25 %, 45 % and 75 %, the 25 % ratio (W1) has been used with 0 % (A1) entrained air pore distribution; 2 % entrapped air pores were assumed. For the dimension and the location determined by the area of exposed surfaces, the specimen is a 4×16×4 cm hexahedron, as shown in Fig. 5. Two locations in the structure, node numbers 38 and 106, have been selected.

Table 1. Arrangement of heat controllers on the surfaces of the specimen

Surface | Node    | Location   | Heat plate # | Heat controller
1       | 38      | Back side  | 3            | 3
2       | 38      | Side       | 3            | 3
3       | 38, 106 | Bottom/top | 1            | 1
4       | 106     | Side       | 4            | 4
5       | 106     | Back side  | 4            | 4
6       | 106     | Bottom     | 2            | 2

The contact conditions for the modeled locations in Fig. 5 are explained in Table 1. The proposed experimental condition is modeled using DuCOM, a life-time simulator for concrete structures, part of which is listed in Table 2. In the multi-component cement hydration model [9], the referential hydration heat rate and activation energy in the equation of reaction kinetics are defined in a manner that considers the temperature dependency. Mutual interactions among the reacting constituents during hydration are quantitatively formulated. The effect of free water on the hydration rate is modeled using the hard-shell concept of a hydrated cluster (Fig. 2). The decline of the heat generation rate in terms of both the amount of free water and the thickness of the internal hydrate layer is formulated.

Fig. 5. The boundary condition and locations of measurement: a) dimensions of the F.E. model; b) enlarged nodes for modeling four exposed surface conditions (node numbers)

Table 2. Integrated DuCOM [5] consists of ten Fortran-90 source files

MULTI-COM.F — Main system control processor
AGNG-model.F — Aging material model of mechanics based on the solidification concept and micro-pore pressure; time-dependent constitutive model of aged concrete
CHLD-model.F — Chloride penetration model; free-ion and bound chloride
CO2G-model.F — Carbon dioxide diffusion model; ion thermodynamic equilibrium in pore solution; carbonation chemical reactions
HEAT-model.F — Cement hydration heat model for each mineral compound; consumption of water and Ca(OH)2 creation model
HYGR-model.F — Moisture migration and equilibrium balance model in micro-pores; structural formation of micro-pores
MECH-model.F — Constitutive model for reinforced concrete solids with multi-directional cracking; soil foundation model and interfaces
OXGE-model.F — Oxygen (dissolved and gaseous) migration and micro-cell based corrosion of steel dispersed in concrete
CALC-model.F — Calcium ion (dissolved) migration and leaching from Ca(OH)2 and C-S-H gel (calcium silicate)
ELEC-model.F — Electric potential field and electron current; conductance and resistance
BIOM-model.F — Bio-mass and its micro-organization rate; coupled heat generation, moisture consumption and volume compaction
META-model.F — Heavy metal ion dissolution and diffusion from mono-sulphate; 6-order chromium ion is considered

The ambient temperature is varying, and it is dependent on the hydration rate. It was decided based on the previous analysis results as: 1) ambient temperature -6 ºC: T1 (heated specimen); 2) ambient temperature -6 ºC: T2 (no heating, hence frozen); 3) ambient temperature 10 ºC: T3 (room temperature). The ambient temperature history with the heating plate setup is presented in Fig. 6.

Fig. 6. Temperature control setup for the ambient and heating plate temperature history: a) plate setup for auto healing; b) the ambient temperature history with heating plate setup

4.2. Measured results of test specimens compared with Bayesian inference prediction

The hydration of the heated specimens predicted by the linear and quadratic Bayesian inference models is compared with the experimentally measured hydration, which is directly related to the strength and stiffness of the concrete structures (Fig. 7). As shown in Fig. 7, from the 3rd day (heated curing started on that day), even though the heated specimens are under cyclic freeze-thaw, they show a higher degree of hydration than the frozen ones. Concrete hardens as a result of the chemical reaction between cement and water known as hydration. This reaction produces heat, called the heat of hydration, which increases the internal temperature during hydration. In general, higher cement contents result in more heat development. For normal, heavyweight and mass concrete, as a rough guide, hydration of cement will generate a concrete temperature rise of about 4.7 °C to 7.0 °C per 50 kg of cement per m^3 of concrete (10 °F to 15 °F per 100 lb of cement per yd^3 of concrete) in 18 to 72 hours. This is the reason for the monotonically increasing degree of hydration while the ambient temperature fluctuates in Fig. 7.

Fig. 7. Predicted and measured degree of hydration for the specimens: a) linear prediction model with experimental results; b) quadratic prediction model with experimental results

The difference between the analysis and the experiments might be largely affected by the boundary conditions. The node 106 specimen (heated on 4 sides) shows much higher stiffness and hydration than the specimen heated on 3 sides (node 38). Therefore the best location for the heated plates is the top surface of the structure.
If the plates are located on the top surface then, because the two most severe locations of the structure were selected, most of the surface area of the considered structure can be treated like the node 106 specimen. It is notable that the linear prediction model shows a larger difference from the experimental results for all three specimen conditions (room temperature, frozen, and heated), with an 8.9 % difference at the 9th day, while the quadratic model predicts the degree of hydration rather closely, with a 5.5 % difference on average. However, the quadratic model also shows a tendency of increasing difference over time once experimental data end at the 8th day, which indicates that expanded prediction models are needed to optimize prediction of missing or future values.

5. Conclusions

An innovative solution based on auto healing of concrete structures damaged by cyclic freeze-thaw has been tested and compared with statistical prediction models. The results of the models, compared with measurements of damage by cyclic freezing and thawing, show successful recovery of strength and stiffness. The proposed Bayesian models lead to a probabilistic prediction of the future degradation of porous material based on prior probability density functions via Markov chain Monte Carlo simulations. Of the two predictive models, the linear prediction model shows a larger difference from the experimental results for all specimens, while the quadratic model provides closer prediction in terms of degree of hydration, showing a 62 % improvement over the linear model. The suggested solution could also be applied more generally, for example to modeling the management of dynamic malfunctions of a network or improved control of a government or corporate budget, which could save a great deal of work through expanded applications.

References

• Cho T. Prediction of cyclic freeze-thaw damage in concrete structures based on response surface method. Construction and Building Materials, Vol. 21, Issue 12, 2007, p. 2031-2040.
• LaFrance-Linden D., Watson S., Haines M. J. Threat assessment of hazardous materials transportation in aircraft cargo compartments. Transportation Research Record 1763, TRB, Washington (DC), National Research Council, 2001, p. 130-137.
• Sundararajan C. Probabilistic Structural Mechanics Handbook. Chapman & Hall, London, 1995.
• Nathan O. S., Dana L. K. Bayesian parameter estimation in probabilistic risk assessment. Reliability Engineering & System Safety, Vol. 62, Issue 1-2, 1998, p. 89-116.
• Maekawa K., Chaube R. P., Kishi T. Modelling of Concrete Performance. E and FN SPON, London, 1999.
• Fagerlund G. Equations for calculating the mean free distance between aggregate particles or air-pores in concrete. CBI Research 8:77, Swedish Cement and Concrete Research Institute, Stockholm.
• Penttala V. Freezing-induced strains and pressures in wet porous materials and especially in concrete mortars. Advanced Cement Based Materials, Vol. 7, Issue 1, 1998, p. 8-19.
• Cho T. A numerical model for the freeze-thaw damages in concrete structures. Journal of Korean Concrete Institute, Vol. 17, Issue 5, 2005, p. 857-868.
• Nakarai K., Ishida T., Kishi T., Maekawa K. Enhanced thermodynamic analysis coupled with temperature-dependent microstructures of cement hydrates. Cement and Concrete Research, Vol. 37, Issue 4, 2007, p. 139-150.
• Press J. S. Bayesian Statistics: Principles, Models and Applications. Wiley, New York, 1989.

About this article

Keywords: Bayesian inference, cyclic freeze-thaw, damaged micro-pore structures, nonlinear stochastic prediction models

This research was a part of the project titled "Development of the Advanced Technology of Nuclear Power Plant Structures Quality on Performance Improvement and Density Reinforcement (201016101004L)" in the Nuclear Power Technology Development Project funded by Korea Institute of Energy Technology Evaluation and Planning (KETEP) and the Ministry of Knowledge Economy, Korea.
Copyright © 2013 Vibroengineering. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
IPartialSplineProps_DG Interface

Used to access per-parameter properties of a BSpline surface via IBSplineSurface_DG.GetAxialSplineProps(). Indices are 0-based.

See also: IBSplineSurface_DG, IBSplineCurve_DG

int GetDegree()

void SetDegree(int degree)

int GetKnotCount()

double GetKnot(int index)

IArrayDouble_DG GetKnots()

void SetKnot(int index, double knot)

void SetKnot1(int index, double knot, int multiplicity)
multiplicity must be <= degree and greater than the previous multiplicity of the knot. The index must be in the [0, GetKnotCount()-1] range.

void SetKnots(IArrayDouble_DG knots)

int GetMultiplicity(int indexKnot)
The indexKnot must be in the [0, GetKnotCount()-1] range.

IArrayInt_DG GetMultiplicities()

void SetMultiplicity(int indexKnot, int multiplicity)
Increases the multiplicity of the knot. multiplicity must be greater than the previous multiplicity and <= the degree of the surface in this parametric direction. The indexKnot must be in the [0, GetKnotCount()-1] range.

void SetMultiplicity1(int indexKnot0, int indexKnot1, int multiplicity)
Increases the multiplicity of the knots with indices in the [indexKnot0, indexKnot1] range. multiplicity must be greater than the previous multiplicity and <= the degree of the surface in this parametric direction. The indexKnot0 and indexKnot1 must be in the [0, GetKnotCount()-1] range.

int GetPoleCount()

bool IsClosed()

bool IsPeriodic()

void SetPeriodic(bool periodic)

bool IsRational()

double GetEndParameter(bool first)

void InsertKnot(double u, int multiplicity, double tolerance, bool add)
If the knot already exists in the table (up to tolerance), its multiplicity is increased by multiplicity if add is true, or increased to multiplicity if add is false. multiplicity must be a positive number not greater than the degree.
void InsertKnots(IArrayDouble_DG knots, IArrayInt_DG multiplicities, double tolerance, bool add)
If a knot knots[i] already exists in the table (up to tolerance), its multiplicity is increased by multiplicities[i] if add is true, or increased to multiplicities[i] if add is false. multiplicities must contain only positive numbers not greater than the degree.

void RemoveKnot(int index, int multiplicity, double tolerance)
Reduces the multiplicity of the knot to the specified value. If multiplicity is 0, the knot is removed. The index must be in the [0, GetKnotCount()-1] range.

void Reverse()
Changes the orientation of this BSpline surface in the parametric direction. The bounds of the surface are not changed, but the given parametric direction is reversed; hence the orientation of the surface is reversed. The knots and poles tables are modified.

void SetRange(double min, double max)
Segments this curve between min and max. Either of these values can be outside the bounds of the curve, but max must be greater than min. All data structures of this curve are modified, but the knots located between min and max are retained. The degree of the curve is not modified. Warning: even if this curve is not closed, it can become closed after the segmentation, for example if min or max are outside the bounds of the curve or if the curve makes a loop.

void SetOrigin(int indexKnot)
Assigns the knot to be the origin of this periodic BSpline surface in this parametric direction. As a consequence, the knots and poles are modified. Raises an exception if this BSpline surface is not periodic in the parametric direction or the index is out of bounds. The indexKnot must be in the [0, GetKnotCount()-1] range.

int GetEndKnotIndex(bool first)

IArrayDouble_DG GetKnotSequence()

int GetKnotDistributionType()
Returns: 0 (Non Uniform), 1 (Uniform), 2 (Quasi Uniform) or 3 (Piecewise Bezier).
If all the knots differ from the preceding knot by a positive constant, the BSpline surface can be: Uniform, if all the knots are of multiplicity 1; Quasi Uniform, if all the knots are of multiplicity 1 except for the first and last knot, which have multiplicity degree + 1; Piecewise Bezier, if the first and last knots have multiplicity degree + 1 and the interior knots have multiplicity degree. Otherwise the surface is non-uniform in the direction.

void FindParameter(double u, double tolerance, out int i0, out int i1, bool withKnotRepetition)
If withKnotRepetition is false, returns in i0 the 0-based index of the last knot that is not greater than u. If u coincides with that knot up to the tolerance, i1 is set equal to i0; otherwise i1 is set equal to i0 + 1. If u is smaller than the first knot minus tolerance, i0 = -1, i1 = 0 is returned. If u is greater than the last knot plus tolerance, i0 = 'size of the knot array', i1 = i0 + 1 is returned. If withKnotRepetition is true, the knot array in the above is considered modified by inserting copies of knots with multiplicity greater than one.
Joeun Jung

Ph.D. (2015) Cornell University

First Position: Researcher, PARC (PDE and Functional Analysis Research Center) of Seoul National University

Iterated Trilinear Fourier Integrals with Arbitrary Symbols

Research Area: harmonic analysis

My research interests fall under the broad categories of harmonic analysis and partial differential equations, with a particular focus on multilinear harmonic analysis with time-frequency analysis techniques. I have been particularly interested in understanding the relationship between L^p estimates of multilinear singular operators and the dimension of the singularity sets of their symbols, as well as applications to non-linear partial differential equations and other fields. In this paper, I prove L^p estimates for trilinear multiplier operators with singular symbols. These operators arise in the study of iterated trilinear Fourier integrals, which are trilinear variants of the bilinear Hilbert transform. Specifically, I consider trilinear operators determined by multipliers that are products of two functions m₁(ξ₁, ξ₂) and m₂(ξ₂, ξ₃), such that the singular set of m₁ lies in the hyperplane ξ₁ = ξ₂ and that of m₂ lies in the hyperplane ξ₂ = ξ₃. While previous work [15] requires that the multipliers satisfy χ_{ξ₁<ξ₂} · χ_{ξ₂<ξ₃}, my results allow for the case of arbitrary multipliers, which have common
Binary Search Tree - CSVeda

A binary tree is a non-sequential (non-linear) data structure used to represent hierarchical relationships among elements. Each node of a binary tree can have at most two child nodes. A Binary Search Tree (BST) is a binary tree that follows these rules:

• All the nodes in the left subtree of a node have values smaller than the node.
• All the nodes in the right subtree of the node have values bigger than the node.

Difference between Binary Tree and Binary Search Tree

Insertion in a Binary Tree

Nodes are added in a binary tree without following any specific pattern. To insert a node in a binary tree you need to specify its parent and also how the node is related to the parent (left child or right child). So the insertion process is complex in the case of a binary tree, since it involves finding the parent by one of the traversal techniques.

Insertion in a Binary Search Tree

Nodes are added in a binary search tree so as to maintain the BST pattern: smaller nodes in the left subtree and bigger nodes in the right subtree. To insert a node in a binary search tree you only need to specify the node to be inserted. A search operation is first performed to find the node that qualifies to become the parent of the new node. This is done by comparing nodes with the new node, starting from the root node. If the new node is smaller than the current node, it must be added in the left subtree. If the new node is bigger than the current node, it must be added in the right subtree.

Deletion in a Binary Tree

To delete a node from a binary tree, the tree has to be searched to find the location of this node and the location of its parent. For this, any one traversal technique (preorder, postorder or inorder) can be used to find the node to be deleted. Once the node is identified it can be deleted by updating the corresponding child pointer of its parent.
Deletion in a Binary Search Tree

Nodes can be deleted from a binary search tree by first searching for the node to be deleted and identifying its parent. The search is performed by comparing each node with the node to be deleted, beginning from the root node. If the target is smaller than the current node, it will be present in the left subtree; otherwise it will be located in the right subtree. This comparison, and the searching in either the left or right subtree, continues until the node to be deleted is found. While doing this, the previous node is preserved in a separate pointer to maintain the information about the parent of the current node.
Output Frequency of CN0349

Category: Hardware
Product Number: CN-0349

My question has been asked several times in different versions, but none of the answers given seem to help in my case. My problem is the output frequency of "vanilla" CN0349 boards. It seems to me that, with the 1 MHz oscillator connected to MCLK of the AD5934, it is not possible to program any values greater than approx. 7250 Hz into the Start Frequency registers. My logic so far is that, with the equation given on page 13 of the AD5934 datasheet, the maximum value programmable to the AD5934 in Hz which still fits in 24 bits is about 7250. I came across this problem because my measurements were consistently wrong, and all of them wrong in the same way. Now I've tested this theory, and so far it seems to be true. As long as I program values below the 7250 Hz threshold, I can use the registers and get the desired output frequency. BUT for some unknown reason, which I cannot find an answer to, at about 30 kHz the board just randomly changes direction and lowers the frequency instead of increasing it. Now it would be just great if anybody could help me out. My 2 questions being:

1. Am I right with my assumption that, with the given equation on page 13 of the AD5934 datasheet, it is not possible to program values greater than ~7250 Hz, since this is the maximum value still equal to/under 24 bits?

2. Why is it not possible to increase the frequency above 30/31 kHz? Are there any ways to work around this problem?

If you have any questions I will gladly answer them. Thanks in advance.

The maximum frequency you can get from the AD5934 is obtained by writing 0xFF into all 3 registers. With the 1 MHz MCLK frequency used on the CN-0349 board, from the formula on page 13 of the datasheet you mentioned, the AD5934 would produce a maximum frequency of about 7812.5 Hz.
To make the board operate at 30 kHz it would be necessary to remove the U6 FXO-HC536R-1 1 MHz oscillator and replace it with a higher-frequency one (for example, FXO-HC536R-16) or wire in the MCLK signal from some higher-frequency external source such as a lab generator, etc. If the datasheet's 16 MHz MCLK frequency is used, the maximum frequency the AD5934 would be able to generate is about 125 kHz. Thank you so much. I just don't understand how someone would NOT write something like that in the datasheet. Would have saved me days of work finding that out the hard way. Anyways, thanks again. Yes, unfortunately the documentation is not clear and there are quite a few behaviors in this chip family that are completely undocumented. Bumping into those in the middle of a project does create serious setbacks. Documentation limits marketplace acceptance of these otherwise rather unique chips. Best of luck with the rest of your project, please do not hesitate to ask if you have any further questions. I had a look through the Circuit Note and the datasheet. It is true - it would have been a good idea to explicitly state the frequency range and resolution of the CN0349 given the 1 MHz master clock. It also looks like there's little to no mention of changing the frequency from a default value of 2 kHz. The only mention in the note is "This oscillator allows the AD5934 to excite the conductivity cell with a frequency of 2 kHz, which is well suited for conductivity measurements." (Does the supplied GUI even allow the frequency to be changed?) Again, I absolutely agree that this should have been documented better in the circuit note. But I don't see any ambiguity in the AD5934 datasheet - all formulas are expressed in terms of the MCLK frequency, with 16 MHz used as an example. The last paragraph on page 29 also explains when you might want to use a lower MCLK frequency. I dug into the no-OS and Linux driver documentation as well.
The only bare metal drivers for the AD5934 seem to be here: https://wiki.analog.com/resources/tools-software/uc-drivers/microchip/ad5933 (note the title is actually for the AD5933). This should be considered legacy code; any new development would be in the no-OS repository. It looks like the AD5933 IS supported in no-OS, and the external clock frequency is defined here: and can be changed through the driver functions: Similarly, the external clock must be defined for the Linux driver: Once the MCLK is defined for these drivers, you should be able to set / read back the frequency properly. Acknowledging this was frustrating for @muzot, hope this helps in the future. all formulas are expressed in terms of MCLK frequency, with 16 MHz used as an example. Indeed those formulas seem relatively straightforward, but evidently not to all users. The last paragraph on page 29 also explains when you might want to use a lower MCLK frequency. The last paragraph on page 29, however, is an example of a rather ambiguous text: it refers to 16.776 MHz as the "nominal clock frequency," while the rest of the datasheet text uses 16 MHz as the typical MCLK frequency. 16.776 MHz is the frequency of the AD5933 internal oscillator, so this is the frequency that cannot "be scaled down" as it is not under the user's control. When this passage is found in the AD5934 datasheet it is even more confusing, as the AD5934 does not have the internal oscillator. And nowhere in the AD5934 datasheet does it say that on power-up the chip is set to operate from the clock coming from this non-existent internal oscillator. Naturally, this clock is not coming and the chip appears dead. So it is the user's responsibility to enable its operation from the external clock by programmatically setting bit D3 in the control register, but how would one know? This is more of an afterthought, but I turned that formula into a little Excel spreadsheet to help with some quick calculations based on the MCLK frequency.
Note this doesn't take into consideration any absolute maximum inputs/outputs, it's purely that equation solved for Fout (attached as an XLSX). In an ideal world, the customer shouldn't have to pick through every bit, there should be device drivers that handle that sort of thing. Well, yes and no. I am writing my own drivers, and external drivers would be great as a reference, but they would not help me out directly. A correct datasheet on the other hand would have saved me days and weeks of work on this project. There are so many "mysteries" regarding just this frequency topic that it's mind-boggling to me. From a customer/engineer point of view: As I read through the datasheet (AD5934) for the first time, I was certain that the clock speed was 16.776 MHz, as it was stated. There was no mention of the AD5933, and I only realized that this was nonsense after I read an answer to a different question from Snorlax. After going through my entire project and changing everything to 16 MHz (again, as stated in the datasheet) I was certain (again...) that this was the right way to do it and that it would solve my problems. There was no mention of another clock being connected to MCLK other than a 16 MHz one (to be fair, it is stated in the CN0349 circuit note, but not in a way you would remember after reading through many pages of two different documents), and honestly my only explanation would be that it wouldn't sound as great if you wrote down that this "vanilla" device only manages to get up to a frequency of about 30 kHz (which I wouldn't have minded were it written down). There is NO mention (and I mean absolutely no mention) of a 1 MHz clock turning the output code from the calculations mentioned on page 13 ff. into a value greater than 24 bits, which means you cannot "normally" (as in: as described in the datasheet) write your start frequency / increment to the registers.
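The 24-bit overflow described here can be illustrated with a short sketch (this models the truncation behaviour reported in this thread, i.e. the chip silently keeping only the low 24 bits of an oversized code; it is not documented behaviour, and the divider in the formula is assumed to be MCLK/16 as in the earlier reply):

```python
# Illustration of the silent 24-bit truncation reported in this thread.
# Assumes the AD5934 keeps only the low 24 bits of an oversized frequency
# code (page-13 formula with MCLK/16, MCLK = 1 MHz as on the stock CN-0349).

MCLK = 1_000_000

def freq_to_code(f_hz):
    return round(f_hz / (MCLK / 16) * 2**27)

def code_to_freq(code):
    return code * (MCLK / 16) / 2**27

requested = 10_000                 # 10 kHz: above the ~7812.5 Hz limit
code = freq_to_code(requested)
truncated = code & 0xFFFFFF        # only 24 bits actually fit the registers

print(code > 0xFFFFFF)             # True: the code no longer fits
print(code_to_freq(truncated))     # the much lower frequency you would get
```

So a request above the limit silently wraps around to a lower output frequency, which matches the "frequency goes down instead of up" symptom described in the opening post.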
Besides the many hours I had to put in just to fully understand what is meant by all these vague statements, I had to change my code and setup multiple times for no apparent reason. Which is just my point. This device is great. It does what it should and it works exceedingly well. IF you know how to use it. Which requires a readable, understandable datasheet. There are many more ambiguous statements (not just regarding the frequency). It would be just great if these datasheets were written from a customer POV, so someone who bought this product could actually use it right away without finding deviations for no apparent reason. Sorry for this rant, but as you can see there was quite some steam building up. If you or any other Analog folks are willing to work these problems out, I am sure myself or even Snorlax are more than willing and able to help write these datasheets in an understandable way to improve the customer experience. Anyways, thanks for all your help again. It seems to work for now and if I stumble upon any more problems, I will contact you guys again. Have a great day, there should be device drivers that handle that sort of thing. The overwhelming majority of users corresponding in this forum on this subject are building their systems with standalone microcontrollers, Arduinos, etc. directly communicating with the chip. Very few are using the evaluation board - there seem to be no use cases for these chips that would involve a PC. Some look for ways of hooking up Arduino I2C lines directly to the chip, bypassing the board controller of the eval board and, perhaps, sidestepping driver issues. Sorry for this rant No need to be sorry - it is important to provide honest feedback. Many forum participants seem to give up on their projects based on these chips after a similar experience.
It is extremely frustrating typing in the higher frequency only to see the chip generating a lower one because the resulting codes were longer than 24 bits and got truncated at the top without a warning, with this behavior not mentioned in the documentation. Imagine how many more outside this forum tried and gave up - not everyone has your stamina to wrestle with this for weeks and write a page-long assessment. Best of luck with the rest of your project, hopefully it is going to be smoother sailing from here. Please keep in touch in case some underwater rocks come up. Well yeah, exactly. I have been going through posts on this site for quite some time now and I cannot understand how the datasheet was never updated although so many people stumble upon the same problems... Anyways, one question occurred after changing my setup/code. For some reason the real/imaginary data I read out after reaching the last increment point (frequency sweep complete) is always wrong, and that's the case for every voltage. Doesn't matter which frequency is the last measurement point. I can easily work around that by programming one more increment number into the register so it never reaches the "real" last frequency point. Would still be interesting to me why this problem occurs. That's probably not the real problem, though. It seems like the real/imaginary data I can read out after increasing the frequency through the sweep options changes in a static way. What I mean by that is that no matter where I start, I get the same results. It always starts with the desired cell constant of my conductivity probe and then the cell constant slowly decreases (I wrote an algorithm which goes through all frequencies and voltages to find the optimal setup). Initially I thought this is an effect of the change in frequency, which obviously changes the measured resistance in solution, but the fact that it doesn't matter at which frequency I start/stop is just strange. Doesn't even matter what the amount of frequency increment is.
As for my routine:
1. Calibrate the GF/NOS etc. for both realms and all voltages
2. Send frequency data, measure temperature
3. Initialize (standby, init, start)
4. Read out data (check res register, get res)
5. Increase frequency
6. Repeat frequency for all voltages
7. Start at 4 again
Is there something fundamentally wrong with this routine? I checked if the output frequencies/voltages are right multiple times and they are. Can't quite figure this one out. There seems to be nothing wrong with the routine. Conductivity measurements can be tricky as the measurement cells tend to drift, especially with aqueous solutions. Hard to give any useful recommendation without knowing enough about your particular setup. As you apply different excitation voltages across your cell, depending on the nature of your electrodes, at higher voltages you might trigger some electrochemical reactions that are likely to affect your results. If you have access to an oscilloscope - as your circuit goes through the measurement routine - it is always a good idea to check the voltage waveform at the AD5934 RFB pin and at the output of the U2B op amp - those should be nearly identical (ignoring the 180º phase) well-formed sinusoids without any distortion or clipping at all frequencies and voltages. Generally speaking, it is useful to keep some linear "equivalent load" with an impedance similar to the expected impedance of your cell, usually a resistor or a series RC. If the results from the cell look strange, it is always useful to connect this "load" and see if the circuit performs as expected with the load; if it does, the issues are with the cell and not the electronics. Feel free to post your data through Insert > Image/video/file as a text or csv file if you think it would be useful for somebody to look at it. Re and Im data as it comes from the chip, prior to other calculations, would be particularly useful. Thank you for your reply! As this is a work project I cannot disclose much more than I already have. What exactly do you mean by an equivalent load? Would a precision resistor be enough?
I have not yet tried that for some reason, but I will do so today or tomorrow and tell you how it went. I will also check the outputs of the RFB pin and the op amp output pin. If these waveforms do not match or do not look good, what exactly would be my conclusion? Something wrong with the chip itself? As for the voltage part and electrochemical reactions, it is not entirely improbable, but I would highly doubt effects like these taking place in standard KCl solutions. Nevertheless, I will start with measurements at the low voltages (0.2 and 0.4 V) to test this. Thanks again. OK, so I checked if everything went fine with a resistor connected to the pins for the cell, and I had no problems measuring the right value across the entire voltage/frequency/realm range. After doing that I connected my cell again and, lo and behold, it worked fine. Probably a faulty cable or something like that. So the problem is fixed for now / never existed. What exactly do you mean by equivalent load? Yes, a 1% or better yet 0.1% resistor would be a good "load" to make sure that the electronics is functioning as expected when you encounter strange behaviour from the real-life measurement cell. Depending on the construction of your cell you might want to use a series RC circuit that would have a complex-valued, frequency-dependent impedance (the resistor is a nearly 100% real-valued and frequency-independent impedance) that could reflect some of the frequency-dependent behaviour of a real measurement cell. If these waveforms do not match or do not look good, what exactly would be my conclusion? If you see a sine wave clipped at the top and/or bottom - this means that the system has too much gain due to too low an impedance of your measurement cell - reducing the value of the Rfb resistors to get the sine wave back into the linear range would be required.
If you see that the sine wave has a distorted shape, for example, starting to resemble a triangular or saw-tooth waveform - the response from your measurement cell is non-linear while the impedance is supposed to be linear, so the measurement results are not quite valid (typically these would not be visible on the oscilloscope; one would need to measure the higher-harmonics content in this waveform with a spectrum analyzer). If the waveform is "hairy" with lots of high-frequency noise and spikes - the measurement setup is picking up too much environmental noise and interference, so better shielding is required. effects like these taking place in standard KCl solutions. If you have electrodes with metal surfaces exposed to the solution, it is nearly certain that you trigger some electrochemical reactions if your excitation voltage is above several hundred millivolts. Great, best of luck with the rest of your project!
{"url":"https://ez.analog.com/reference-designs/f/q-a/562795/output-frequency-of-cn0349/473312","timestamp":"2024-11-14T10:21:00Z","content_type":"text/html","content_length":"324587","record_id":"<urn:uuid:0f93bc62-c217-4d14-881c-dcaeb98e4516>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00476.warc.gz"}
Compound Interest Calculator: Calculate your interest Compound interest calculator Use our compound interest calculator to calculate how your assets will develop over time. Reading aid If you begin with a starting capital of CHF 10’000, and consistently save CHF 100 each month, with your funds compounding at an annual interest rate of % over a period of 20 years, your total accumulated capital will be . This amount comprises in total deposits, which includes your initial investment and the sum of your monthly contributions, along with earned in interest. Investments are always subject to a potential risk of loss, especially with a short-term investment horizon. The return on investments can vary greatly depending on the market environment and can also be negative under unfavourable market conditions. The compound interest effect and its significance What is compound interest? Compound interest is a basic financial concept that involves the calculation of interest not only on the original amount invested, but also on the interest already accrued. The influence of time on compound interest The compound interest effect not only has a greater impact with higher and more frequent interest payments, but also with longer investment periods. The exponential dynamics of compound interest unfold particularly impressively over a period of years. The decisive factor is, of course, the interest rate. At an interest rate of 0.5 percent, the effect may seem small. It would take several decades for the initial capital to double despite compound interest. With an interest rate of 5 percent, on the other hand, it takes less than 15 years for the capital to double. How do you make the most of the compound interest effect? As we have seen, the compound interest effect offers many advantages.
To optimize the result, you should pay attention to the following points and use them wherever possible:
• Choose an investment option (if possible) that pays interest during the year rather than annually. The compound interest effect is enhanced by reinvesting the interest more frequently.
• Do not withdraw the interest, but leave it in the investment. This allows you to make the most of the compound interest effect.
• Start with a certain initial investment. The higher the starting balance, the greater the compound interest effect from the outset.
• Let time work for you. The longer (and therefore more often) a balance earns interest, the faster your assets will grow.
• Avoid high fees, because low costs directly benefit the return.
How do you calculate compound interest? So if you have capital and patience, you can use the compound interest effect to increase your money without much effort. Simple mathematical formulas can be used to calculate how much compound interest will earn on your money over the years. The formula for calculating the final amount with monthly compounding is: FV = PV × (1 + r/12)^(12×t), where FV is the future value, PV is the present value, r is the annual interest rate (return), and t is the time in years. Expand your investment options with ETFs While traditional savings accounts and fixed-income investments offer a degree of security, exchange-traded funds (ETFs) are an option for investors seeking higher long-term returns. Combined with the compound interest effect, ETFs can help you build wealth in the best possible way. The majority of ETFs used by True Wealth are accumulating ETFs, i.e. the current income is already reinvested on the ETF side. In the case of distributing ETFs, our rebalancing ensures that the income is invested in line with the investment strategy. This keeps your portfolio on track and makes optimum use of the compound interest effect.
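The compound interest formula, extended with the monthly deposits described in the reading aid, can be sketched in a few lines (a hypothetical illustration; the function and variable names and the example values are mine, not the calculator's):

```python
# Compound interest with monthly contributions, compounded monthly,
# mirroring the FV = PV * (1 + r/12)**(12*t) formula for the lump sum.
# Names and example values are illustrative only.

def future_value(pv, monthly, annual_rate, years):
    """Future value of a starting capital `pv` plus a `monthly` deposit,
    with interest credited (and reinvested) every month."""
    i = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly periods
    growth = (1 + i) ** n
    # The lump sum grows for n periods; the stream of deposits is an
    # ordinary annuity, each payment compounding from the month it is made.
    return pv * growth + monthly * ((growth - 1) / i if i else n)

fv = future_value(pv=10_000, monthly=100, annual_rate=0.05, years=20)
deposits = 10_000 + 100 * 12 * 20   # initial capital plus all contributions

print(round(fv))             # total accumulated capital after 20 years
print(round(fv - deposits))  # the portion earned purely as interest
```

With no contributions the function reduces exactly to the FV formula above, which is an easy way to cross-check it.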
{"url":"https://www.truewealth.ch/compound-interest-calculator","timestamp":"2024-11-13T18:54:15Z","content_type":"text/html","content_length":"173654","record_id":"<urn:uuid:97cab806-2323-4323-a4a1-18ed5052d03e>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00601.warc.gz"}
Confusion Matrix and Accuracy You chose a cutoff of 0.5 in order to classify the customers into ‘Churn’ and ‘Non-Churn’. Now, since you’re classifying the customers into two classes, you’ll obviously have some errors. The types of errors that can occur are: • ‘Churn’ customers being (incorrectly) classified as ‘Non-Churn’. • ‘Non-Churn’ customers being (incorrectly) classified as ‘Churn’. To capture these errors, and to evaluate how well the model performs, you’ll use something known as the ‘Confusion Matrix’. A typical confusion matrix would look like the following. This table shows a comparison of the predicted and actual labels. The actual labels are along the vertical axis, while the predicted labels are along the horizontal axis. Thus, the cell at the second row, first column (263) is the number of customers who have actually ‘churned’ but whom the model has predicted as non-churn. Similarly, the cell at the second row, second column (298) is the number of customers who are actually ‘churn’ and are also predicted as ‘churn’. Note that this is an example table and not what you’ll get in Python for the model you’ve built so far. It is just used as an example to illustrate the concept. Now, the simplest model evaluation metric for classification models is accuracy – it is the percentage of correctly predicted labels. So what would the correctly predicted labels be? They would be: • ‘Churn’ customers being correctly identified as churn. • ‘Non-churn’ customers being correctly identified as non-churn. As you can see from the table above, the correctly predicted labels are contained in the first row and first column, and the last row and last column, as highlighted in the table below. Now, accuracy is defined as: Accuracy = (number of correctly predicted labels) / (total number of predictions). Hence, using the table, the accuracy would be the sum of these two diagonal cells divided by the total number of customers. Now that you know about the confusion matrix and accuracy, let’s see how good the model built so far is, based on its accuracy. But first, answer a couple of questions.
So using the confusion matrix, you got an accuracy of about 80.8%, which seems to be a good number to begin with. The steps you need to follow to calculate accuracy are: • Create the confusion matrix. • Calculate the accuracy by applying the ‘accuracy_score’ function to the above matrix. Coming Up So far you have only selected features based on RFE. Further elimination of features using the p-values and VIFs manually is yet to be done. You’ll do that in the next section.
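The two steps above can be sketched in plain Python (the label lists are invented stand-ins for the churn data; in practice you would use scikit-learn's `confusion_matrix` and `accuracy_score` on your actual and predicted test labels):

```python
# Minimal sketch of building a 2x2 confusion matrix and computing accuracy.
# The label lists are invented for illustration; with scikit-learn you would
# call confusion_matrix(actual, predicted) and accuracy_score(actual, predicted).

actual    = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = Churn, 0 = Non-Churn
predicted = [1, 0, 0, 1, 0, 1, 1, 0]

# matrix[i][j] = count of customers with actual label i predicted as label j
matrix = [[0, 0], [0, 0]]
for a, p in zip(actual, predicted):
    matrix[a][p] += 1

correct = matrix[0][0] + matrix[1][1]   # the diagonal: correctly labelled
accuracy = correct / len(actual)

print(matrix)    # → [[3, 1], [1, 3]]
print(accuracy)  # → 0.75
```

The diagonal cells play the same role as the highlighted cells in the example table: summing them and dividing by the total count gives the accuracy.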
{"url":"https://www.internetknowledgehub.com/confusion-matrix-and-accuracy/","timestamp":"2024-11-09T23:06:39Z","content_type":"text/html","content_length":"81010","record_id":"<urn:uuid:16a0934e-fdab-4828-b173-e5f1506117a9>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00252.warc.gz"}
BEST MACD indicator trading strategies 2024 The MACD indicator strategy that we will explain is a popular strategy. You can use this strategy in various financial markets. This method is relatively easy to understand. In addition, this method can be applied to various market conditions, and most importantly it has the potential to generate profits if used correctly. MACD indicator MACD (Moving Average Convergence Divergence) is one of the most popular and widely used technical indicators, both by experienced traders and beginners. It is used to identify market trends using moving averages. Mastering the Signals: Understanding the Components of MACD Adding the MACD indicator to the trading chart is just the first step. To truly exploit its potential in trading, it is important to understand how this indicator works. MACD consists of four main components, which we will discuss in order to interpret the signals produced: • MACD Line: This line is usually blue and represents the difference between the 12-period and 26-period exponential moving averages of the price. • Signal Line: Usually orange, this line is a moving average of the MACD line itself, typically over 9 periods. • Histogram: Colored bars showing the difference between the MACD line and the signal line. The closer together the two lines are, the shorter the histogram (and vice versa). • Centerline (Zero Line): The horizontal line, generally in the middle of the MACD chart, representing the zero point. Now that you know the four components of the MACD indicator, let's see how to put them to work. Looking for Trends with MACD: Reading Crossover Signals and Histograms MACD and Signal Line Crossover: • Upward Crossover: If the MACD line crosses above the signal line, this indicates a potential uptrend. This means that the momentum of price movements is strengthening upwards.
• Downward Crossover: On the other hand, if the MACD line crosses below the signal line, the trend will likely reverse downwards. The momentum of price movements is weakening. • Enlarged Histogram Bars: The higher and thicker the histogram bars, the stronger the momentum of the current trend, be it an uptrend or a downtrend. • Shrinking Histogram Bars: Conversely, shorter and thinner histogram bars indicate weakening trend momentum. Capturing Trends with the MACD Indicator The MACD (Moving Average Convergence Divergence) indicator is known for its ease of use. However, many traders fall into the trap of relying on MACD alone. In fact, MACD performs best when a trend is dominating the market. For example, in an uptrend, MACD accurately predicts upward price movements. On the other hand, in a downtrend, MACD sometimes still gives a buy signal even though prices are falling. Overcoming MACD False Signals For example, if you are a long (buy) trader, then ideally you should only trade when the trend is up. Going against the trend is like swimming against the current, and there is a high risk of losses. Confirming Trends with the 200-Day Moving Average The solution is easy: add a 200-day moving average indicator. By adding the 200-day moving average, you will see one extra line on the chart, as in the following image. If the price is above that line, the market is in an uptrend; if it is below, a downtrend. For example, the figure below shows an uptrend and a downtrend. By combining the MACD and the 200-day moving average, you can make better trading decisions by considering the overall trend direction.
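The component definitions above can be sketched in plain Python using the standard MACD(12, 26, 9) parameters (the price series is synthetic; with real data you would typically compute the EMAs from actual closing prices, for example with a library such as pandas):

```python
# Sketch of the standard MACD(12, 26, 9) calculation in plain Python.
# The price series is synthetic; real use would pull closes from a data feed.

def ema(values, span):
    """Exponential moving average with smoothing factor 2 / (span + 1)."""
    alpha = 2 / (span + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

prices = [100 + 0.5 * i for i in range(60)]   # a steady synthetic uptrend

macd_line   = [f - s for f, s in zip(ema(prices, 12), ema(prices, 26))]
signal_line = ema(macd_line, 9)
histogram   = [m - s for m, s in zip(macd_line, signal_line)]

# In a persistent uptrend the MACD line sits above the signal line and the
# histogram is positive -- the bullish configuration described above.
print(macd_line[-1] > signal_line[-1])   # → True
print(histogram[-1] > 0)                 # → True
```

The fast 12-period EMA lags the rising price less than the slow 26-period EMA, so the MACD line stays positive and above its own 9-period signal line for as long as the uptrend persists.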
{"url":"https://www.vfxalltool.com/2023/01/best-macd-indicator-trading-strategies.html","timestamp":"2024-11-14T13:11:25Z","content_type":"application/xhtml+xml","content_length":"148003","record_id":"<urn:uuid:32a21505-2374-43d1-a6c8-a8cfc5f0175c>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00861.warc.gz"}
Discount Calculator | Find the Discount Percentage - Orchids Discount Calculator is a free online calculator tool with which you can find the discount percentage with the help of the cost price and selling price. It makes your calculation easy and accurate, and displays the result in a fraction of a second. When you go shopping and see a dress whose price is 1900 but you get it for just 1200, do you know how much discount you are getting? It may take time to calculate it manually, but the Discount Calculator makes your task hassle-free: you get the discount result in seconds. What is Discount? A discount is when a certain amount or percentage is taken off an item's regular price. When you go shopping during a festival, sellers will give you some reduction on the original price. That is called a discount. Example of Discount Suppose you are at the supermarket and you want to buy cookies. The cost of the cookies is Rs. 650, but you get them for just Rs. 420. What is the discount you got? Given values are: • Cost Price = 650 • Selling Price = 420 • Discount = ? Let’s put the values into the formula: • Discount = Cost Price – Selling Price • Discount = 650 - 420 • Discount = 230 • Discount Percentage = (Discount / Cost Price) × 100 = (230 / 650) × 100 ≈ 35.4%
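The worked example above can be written as a tiny function (a sketch; the function name is my own):

```python
# The discount calculation above as a small function.
# Values are the cookie example from the text.

def discount(cost_price, selling_price):
    """Return the discount amount and the discount percentage."""
    amount = cost_price - selling_price
    percent = amount / cost_price * 100
    return amount, percent

amount, percent = discount(650, 420)
print(amount)             # → 230
print(round(percent, 1))  # → 35.4
```

The same function reproduces the dress example from the introduction: a 1900 item sold for 1200 is a 700 discount, or roughly 36.8%.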
{"url":"https://www.orchidsinternationalschool.com/calculators/basic-calculator/discount-calculator","timestamp":"2024-11-07T01:05:52Z","content_type":"text/html","content_length":"27283","record_id":"<urn:uuid:04fddc75-9b31-4349-90a6-2c2e8999f721>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00453.warc.gz"}
Notturno Seminar: Popper and the Open Society 23 July 2017 – Paul Meehl: “The Problem is Epistemology, not Statistics” All papers etc. referred to on this page can be found in PDF format on bit.ly/Notturno-Seminar. A paper co-authored by me highlights the need for some Popperian analysis of the recent replication crisis in the social sciences: “Falsificationism is not just ‘potential’ falsifiability, but requires ‘actual’ falsification: Social psychology, critical rationalism, and progress in science” In 1961, at a conference of the German Sociological Association, Popper was asked to give a paper on “The Logic of the Social Sciences”. Popper’s main thesis was: “The method of the social sciences, like that of the natural sciences, consists in trying out tentative solutions to certain problems”. Rather predictably, Popper was accused by Adorno of ‘scientism’—which accusation Popper had laboriously tried to preempt by pointing out all the fallacious assumptions usually made by those using this term: There is, for instance, the misguided and erroneous methodological approach of naturalism or scientism which urges that it is high time that the social sciences learn from the natural sciences what scientific method is. This misguided naturalism establishes such demands as: begin with observations and measurements; this means, for instance, begin by collecting statistical data; proceed, next, by induction to generalizations and to the formation of theories. It is suggested that in this way you will approach the ideal of scientific objectivity, so far as this is at all possible in the social sciences. In so doing, however, you ought to be conscious of the fact that objectivity in the social sciences is much more difficult to achieve (if it can be achieved at all) than in the natural sciences. For an objective science must be ‘value-free’; that is, independent of any value judgment.
But only in the rarest cases can the social scientist free himself from the value system of his own social class and so achieve even a limited degree of ‘value freedom’ and ‘objectivity’. Every single one of the theses which I have here attributed to this misguided naturalism is in my opinion totally mistaken: all these theses are based on a misunderstanding of the methods of the natural sciences, and actually on a myth—a myth, unfortunately all too widely accepted and all too influential. It is the myth of the inductive character of the methods of the natural sciences, and of the character of the objectivity of the natural sciences. What Popper was saying, in short, was: If you want to understand how any science works, you’ll have to understand that induction (and its corollary of certain knowledge as the aim of science) will have to be given up and that there is such a thing as objectivity, but it doesn’t flow from a “well-purged mind” but is the result of a social critical process. Hayek, in his Nobel Prize Lecture of 1974, said much the same thing about ‘scientism’ as Popper, in a way that reminds one of the current predicament in the social sciences: “in the social sciences often that is treated as important which happens to be accessible to measurement”, condemning “the superstition that only measurable magnitudes can be important”. And even further: “I confess that I prefer true but imperfect knowledge, even if it leaves much indetermined and unpredictable, to a pretence of exact knowledge that is likely to be false.” The social sciences, meanwhile, are dealing with what has been termed a replication crisis.
One of the first high-profile articles to highlight a problem in the social sciences was Ioannidis’s “Why Most Published Research Findings Are False”, in which he said: Several methodologists have pointed out that the high rate of nonreplication (lack of confirmation) of research discoveries is a consequence of the convenient, yet ill-founded strategy of claiming conclusive research findings solely on the basis of a single study assessed by formal statistical significance […]. Ioannidis specifically blames over-reliance on p-values for the crisis: “Research is not most appropriately represented and summarized by p-values, but, unfortunately, there is a widespread notion that medical research articles should be interpreted based only on p-values.” He goes on to point out that prior probability, statistical power, and effect size can influence a test in such a way that even a result of p≤0.05 may have a probability of being true of less than 50%. Other authors offer similar objections. Button et al., for example, make a point about statistical power: “A study with low statistical power has a reduced chance of detecting a true effect, but it is less well appreciated that low power also reduces the likelihood that a statistically significant result reflects a true effect.” Colquhoun adds that the 5% threshold for significance tests should be drastically lowered: “if you wish to keep your false discovery rate below 5%, you need to use a three-sigma rule, or to insist on p≤0.001.” Krantz puts his finger on a widespread misconception: “A common error in this type of test is to confuse the significance level actually attained (for rejecting the straw-person null) with the confirmation level attained for the original theory.” Better statistical tools might help overcome this problem: “Statistics could help researchers avoid this error by providing a good alternative measure of degree of confirmation.” Trafimow, who banned p-values altogether in Basic and Applied Social
Psychology, the journal he edits, takes this line of reasoning further than anybody else by declaring NHST as such invalid: As has been pointed out […] but ignored by the majority of quantitative psychologists […], the probability of the finding, given that the null hypothesis is true (again, this is p), is not the same as the probability of the null hypothesis being true, given that one has obtained the finding. … Remember that the goal of the significance test is to reject the null hypothesis, which means it needs to be demonstrated to have a low probability of being true. … It is now widely accepted by those quantitative researchers who are mathematically sophisticated and have expertise about the null hypothesis significance testing procedure, that it is invalid. Unfortunately, the first two statements are factually wrong, such that his conclusion shows nothing more than that badly done and wrongly interpreted methods are invalid. But even more importantly, it is the supposed aim of social scientific research that is problematic. Button et al., for example, have this to say: [T]he lower the power of a study, the lower the probability that an observed effect that passes the required threshold of claiming its discovery (that is, reaching nominal statistical significance, such as p<0.05) actually reflects a true effect. This probability is called the PPV of a claimed discovery. This focus on being able to claim a discovery, on finding an effect, on confirming a theory is unambiguously and unabashedly defended by Colquhoun, who seems to think that not relying on inductive reasoning in science would be utterly absurd: The problem of induction was solved, in principle, by the Reverend Thomas Bayes in the middle of the 18th century. He showed how to convert the probability of the observations given a hypothesis (the deductive problem) to what we actually want, the probability that the hypothesis is true given some observations (the inductive problem). 
… Science is an exercise in inductive reasoning: we are making observations and trying to infer general rules from them. Induction can never be certain. In contrast, deductive reasoning is easier: you deduce what you would expect to observe if some general rule were true and then compare it with what you actually see. The problem is that, for a scientist, deductive arguments don’t directly answer the question that you want to ask. This totally misunderstands the problem of induction, wrongly assumes that the probability of a hypothesis being true is of any import, and absurdly claims that Bayes’s theorem (if applied correctly) has anything to do with induction. Now, Meehl pointed pretty much all of this out 50 years ago. In his “The Problem is Epistemology, not Statistics”, he says: Significance tests have a role to play in social science research but their current widespread use in appraising theories is often harmful. The reason for this lies not in the mathematics but in social scientists’ poor understanding of the logical relation between theory and fact, that is, a methodological or epistemological unclarity. And in his 1967 paper: The writing of behavior scientists often reads as though they assumed—what it is hard to believe anyone would explicitly assert if challenged—that successful and unsuccessful predictions are practically on all fours in arguing for and against a substantive theory. … Inadequate appreciation of the extreme weakness of the test to which a substantive theory T is subjected by merely predicting a directional statistical difference d>0 is then compounded by a truly remarkable failure to recognize the logical asymmetry between, on the one hand, (formally invalid) “confirmation” of a theory via affirming the consequent in an argument of form: [T⊃H1, H1, infer T], and on the other hand the deductively tight refutation of the theory modus tollens by a falsified prediction, the logical form being: [T⊃H1, ~H1, infer ~T].
Finally, it is almost comical that the inventor of NHSTs, R.A. Fisher, had for the most part preempted both the misuse of NHSTs and the inductive thinking that is so prevalent in today’s social sciences. In his The Design of Experiments, Fisher puts this in almost Popperian terms: In relation to any experiment we may speak of this hypothesis as the “null hypothesis,” and it should be noted that the null hypothesis is never proved or established, but is possibly disproved, in the course of experimentation. Every experiment may be said to exist only in order to give the facts a chance of disproving the null hypothesis. And, of course, Fisher also made sure to underline (in “The Arrangement of Field Experiments”) that no single test or study should be taken as grounds to claim anything, much less a “discovery”: “A scientific fact should be regarded as experimentally established only if a properly designed experiment rarely fails to give this level of significance.” The reason for this is so obvious that it really should not need to be said: No such selection can eliminate the whole of the possible effects of chance coincidence, and if we accept this convenient convention, and agree that an event which would occur by chance only once in 70 trials is decidedly “significant,” in the statistical sense, we thereby admit that no isolated experiment, however significant in itself, can suffice for the experimental demonstration of any natural phenomenon; for the “one chance in a million” will undoubtedly occur, with no less and no more than its appropriate frequency, however surprised we may be that it should occur to us. The problem in the social sciences, it turns out, is really both epistemology and statistics. Fisher was more than explicit in pointing out the pitfalls. Meehl tried to remind his profession not to disregard them. Both have had less than stellar success.
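The positive-predictive-value arithmetic that Ioannidis, Button et al., and Colquhoun appeal to can be checked directly. A minimal sketch (the prior and power values below are illustrative assumptions, not figures taken from any of the cited papers):

```python
def ppv(prior, power, alpha=0.05):
    """P(hypothesis true | significant result): true positives over all positives."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# A well-powered test of a hypothesis with a 10% prior probability:
print(round(ppv(prior=0.10, power=0.80), 3))  # 0.64
# The same hypothesis tested with low power: a 'significant' result is now
# more likely false than true, which is exactly Button et al.'s point.
print(round(ppv(prior=0.10, power=0.20), 3))  # 0.308
```

On these assumptions, lowering α (Colquhoun's three-sigma suggestion) or raising power pushes the PPV up; but, as the post argues, no such numerical adjustment touches the underlying epistemological confusion.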
Understanding Mathematical Functions: How To Calculate Range Of A Function Mathematical functions are fundamental to understanding how different variables interact and change. One crucial aspect of functions is their range, which is the set of all possible values the function can output. In this blog post, we will delve into the definition of mathematical functions and the importance of understanding the range of a function. Key Takeaways • Mathematical functions are crucial for understanding how variables interact and change. • The range of a function is the set of all possible output values. • Understanding functions in mathematics is important for various applications. • Calculating the range of a function is essential for understanding its behavior. • Common mistakes in calculating range can be avoided with careful attention and understanding. Understanding Mathematical Functions In mathematics, a function is a relation between a set of inputs and a set of possible outputs with the property that each input is related to exactly one output. Functions are widely used in various fields of mathematics and have practical applications in many areas including science, engineering, and economics. A. Definition of a mathematical function A mathematical function is a rule that assigns to each input exactly one output. It can be represented by an equation, a graph, a table, or a verbal description. The input of a function is called the independent variable, and the output is called the dependent variable. B. Examples of common mathematical functions 1. Linear function: A linear function is a function that can be represented by a straight line on a graph. It has the form f(x) = mx + b, where m and b are constants. 2. Quadratic function: A quadratic function is a function of the form f(x) = ax^2 + bx + c, where a, b, and c are constants and a is not equal to 0. 3.
Exponential function: An exponential function is a function of the form f(x) = a^x, where a is a positive constant and x is the independent variable. C. Importance of understanding functions in mathematics Functions are fundamental to the study of mathematics and play a crucial role in various mathematical concepts and theories. Understanding functions helps in analyzing and solving mathematical problems, modeling real-world phenomena, and making predictions. It also provides a solid foundation for advanced topics such as calculus, differential equations, and mathematical modeling. What is the Range of a Function? The range of a function refers to the set of all possible output values that the function can produce for a given input. In simpler terms, it is the collection of all the y-values that the function can generate when x-values are fed into it. A. Definition of range The range is the complete set of all output values that the function can produce. It is denoted as "f(x)" or "y" in function notation. B. Importance of calculating the range of a function Understanding the range of a function is crucial for various mathematical and real-world applications. It provides insights into the behavior of the function and helps in analyzing its properties. Additionally, it is essential for determining the domain of the function and identifying any restrictions on the input values. C. How range relates to the output of a function The range is directly related to the output of the function. By calculating the range, we can ascertain the full range of values that the function can produce, thus enabling us to understand the function's behavior and characteristics comprehensively. Methods for Calculating Range Understanding the range of a mathematical function is crucial in various fields such as physics, engineering, and economics.
The range of a function represents all the possible output values of the function. There are several methods to calculate the range of a function, including algebraic techniques, graphical methods, and using technology and calculators. A. Using algebraic techniques Algebraic techniques provide a systematic approach to finding the range of a function by analyzing the function's equation. • Substitution method: By substituting different values for the independent variable, you can determine the corresponding output values of the function and identify the range. • Interval notation: Expressing the range using interval notation allows you to represent the set of all possible output values of the function within a given interval. B. Graphical methods Graphical methods involve visually analyzing the graph of the function to determine the range. • Observing the range on the graph: By examining the vertical extent of the graph, you can identify the range of the function. • Using horizontal line test: This test helps in determining if every possible output value is covered by the function, thereby indicating the range. C. Using technology and calculators Advancements in technology have made it easier to calculate the range of a function using various software and calculators. • Graphing calculators: Graphing calculators can plot the function's graph and provide a visual representation of the range. • Computer software: Software programs like MATLAB, Mathematica, and graphing tools in Excel can perform complex calculations to determine the range of a function. Examples of Calculating Range Understanding the range of a mathematical function is essential in grasping its behavior and characteristics. Here, we will walk through the step-by-step calculation of the range for different types of functions. A. Step-by-step calculation of the range of a linear function A linear function is of the form f(x) = mx + c, where m and c are constants. 
To calculate the range of a linear function, we can follow these steps: • Step 1: Determine the slope (m) and y-intercept (c) of the linear function. • Step 2: If the slope (m) is nonzero, the function takes every real value, so the range is all real numbers, (−∞, ∞). • Step 3: If the slope is zero, the function is constant, and the range is the single value {c}. Note that, unless the domain is restricted to an interval, the signs of m and c do not limit the range of a linear function. B. Step-by-step calculation of the range of a quadratic function A quadratic function is of the form f(x) = ax^2 + bx + c, where a, b, and c are constants. Calculating the range of a quadratic function involves the following steps: • Step 1: Determine the vertex of the quadratic function using the formula x = -b/2a. • Step 2: If the coefficient of x^2 (a) is positive, then the range starts from the y-value of the vertex and extends to positive infinity. If a is negative, the range extends from negative infinity to the y-value of the vertex. C. Step-by-step calculation of the range of an exponential function An exponential function is of the form f(x) = a^x, where a is a positive constant. Computing the range of an exponential function can be done through the following steps: • Step 1: Determine the behavior of the exponential function. For any positive base a ≠ 1, the output a^x is always positive, so the range is (0, positive infinity); the function is increasing for a > 1 and decreasing for 0 < a < 1. • Step 2: For a > 1, as x approaches negative infinity the values approach 0 without reaching it, and as x approaches positive infinity they grow without bound; for 0 < a < 1 the two limits are reversed. Common Mistakes and Pitfalls When dealing with mathematical functions, it is important to be aware of the common misconceptions and errors that can occur when calculating the range of a function. By understanding these pitfalls, you can work to avoid them and ensure a more accurate calculation. A.
Misconceptions about range One common misconception about the range of a function is that it is simply the set of all possible output values. While this is partially true, it is important to remember that the range is the set of all actual output values produced by the function over all inputs in its domain. It does not include any potential or hypothetical output values that may not actually be achieved by the function. B. Errors in calculation Another common mistake when calculating the range of a function is misunderstanding the behavior of the function or misinterpreting the domain. This can lead to inaccuracies in determining the range, as well as overlooking certain output values that should be included. Additionally, arithmetic errors or miscalculations can also result in incorrect range calculations. C. How to avoid common mistakes when calculating range To avoid misconceptions and errors when calculating the range of a function, it is important to carefully analyze the behavior of the function and the constraints of the domain. Consider any limitations or restrictions on the input values, as these can impact the output values and ultimately the range. Additionally, double-check all calculations and ensure that no arithmetic mistakes have been made. Understanding mathematical functions is crucial for solving a wide range of real-world problems and for furthering our understanding of the world around us. By accurately calculating the range of a function, we can identify the possible output values and how they relate to the input values. This is essential for applications in science, engineering, economics, and many other fields. Knowing how to calculate the range of a function allows us to interpret data, make predictions, and optimize processes. It is a valuable skill that can deepen our understanding of mathematics and its applications in various industries.
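The step-by-step rules described above can be sketched in code. This is an illustrative sketch (the function names are ours, not standard library calls); note that a non-constant linear function on an unrestricted domain has range (−∞, ∞):

```python
import math

def linear_range(m, c):
    """Range of f(x) = m*x + c over all reals."""
    return (c, c) if m == 0 else (-math.inf, math.inf)

def quadratic_range(a, b, c):
    """Range of f(x) = a*x**2 + b*x + c, from the vertex at x = -b/(2a)."""
    x_v = -b / (2 * a)
    y_v = a * x_v**2 + b * x_v + c
    return (y_v, math.inf) if a > 0 else (-math.inf, y_v)

def exponential_range(a):
    """Range of f(x) = a**x for a > 0, a != 1: all positive reals."""
    return (0.0, math.inf)

print(quadratic_range(1, -4, 7))  # vertex at (2, 3), opens upward -> (3.0, inf)
```

A quick sanity check against the graphical method: sampling f over a wide interval should never produce a value outside the interval these functions return.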
38. Temporal Conditionals

a. Structure. The forms of conditional proposition of temporal modality are very similar to those of natural modality. I will therefore analyze them only very briefly. They are presented below without quantifier, but of course should be used with a singular or plural quantifier.

When S is P, it is always Q
When S is P, it is never Q
S is P and Q
S is P and not Q
When S is P, it is sometimes Q
When S is P, it is sometimes not Q

(The symbolic notation for temporal conditionals could be similar to that used for naturals, except with the suffixes c, t instead of n, p; m and a are of course identical.) Temporal conditional propositions have structures and properties very similar to their natural analogues. There is no need, therefore, to reiterate everything here, since only the modal type differs, while the categories of modality involved remain unchanged. Temporal conditionals signify that at all, this given, or some time(s), within the bounds of any, the indicated, or certain S being P, it/each is also Q (or: nonQ), as the case may be. (Similarly, it goes without saying, with a negative antecedent, nonP.) Here, ‘when’ means ‘at such times as’. The actuals (momentaries) exist ‘at the time tacitly or explicitly under consideration’, the modals (constants or temporaries) concern a plurality of (unspecified) times. The antecedent and consequent events are actualities. The modal basis of their relationship is the temporal possibility: ‘this/those S is/are sometimes both P and Q (or: nonQ)’. The connection between them is expressed by a temporal modifier placed in the consequent; for constants, it is ‘this/those S is/are never both P and nonQ (or: Q)’, for temporaries, it is identical with the basis. The quantifier specifies the instances of S concerned. The order of sequence of the events, though often left unsaid, should be understood.
Each has a relative duration, as well as location in time. Expressions like ‘while’, ‘at the same time as’, ‘before’, ‘thereafter’, ‘whenever’, are used to specify such details. b. Properties. With regard to opposition, constant conditionals (like ‘Whenever S is P, it is Q’) do not formally imply the corresponding momentaries (‘S is now P and Q’, for example), although both the former and the latter do imply temporaries (their common basis, ‘S is sometimes P and Q’, here). A constant like ‘When this S is P, it is always Q’, is contradicted by denial of either its basis or connection; that is, by saying ‘This S is never P’ or, ‘This S is sometimes both P and nonQ’. A temporary like ‘When this S is P, it is sometimes Q’, is contradicted by denying the base of either or both events; that is, by saying ‘This S is never both P and Q’. Other oppositional relations follow from these automatically, and the same may be repeated for negative events. Momentaries are identical to, and behave like, actuals, of course. The processes of translation, eduction, apodosis, syllogism, production, and dilemma, likewise all follow the same patterns for temporals as for naturals. Temporal disjunction is also very similar to natural disjunction, and its logic can be derived from that of temporal subjunction. Although temporal and natural conditionals have analogous structure and properties, each within its own system, the continuity between the two systems is here somewhat more broken than it was in the context of categoricals. In conditionals, natural necessity does not imply constancy. Compare, for instance, ‘When this S is P, it must be Q’ and ‘When this S is P, it is always Q’. Although the natural connection ‘This S cannot be P and nonQ’ implies the temporal connection ‘This S is never P and nonQ’ — the natural basis ‘this S can be P and Q’ does not imply (but is implied by) the temporal basis ‘this S is sometimes P and Q’.
Since the higher connection is coupled with an inferior basis, while the lower connection is coupled with a superior basis, the ‘must’ conditional as a whole is unable to subalternate the ‘always’ version. This is easy to understand, if we remember that even within natural conditioning, ‘must be’ does not imply ‘is’; it follows that ‘must be’ cannot imply ‘is always’, which is essentially a subcategory of ‘is’ (though it too does not imply ‘is’, as already mentioned). This breach in modal continuity, in the context of conditionals, further justifies our regarding natural and temporal modal categories, as belonging to distinct systems of modality. In categorical relationships, these two types of modality differ merely in the frame of reference of their definitions (circumstances or times); but a more marked divergence between them takes shape when they are applied to conditioning. For similar reasons, natural necessity does not even imply temporariness. On the other hand, temporariness does imply potentiality, since, for instance, ‘When this S is P, it is sometimes Q’ implies ‘When this S is P, it can be Q’. Here, the categorical continuity is still operative. Also, the actualities for both types coincide: ‘in the present circumstances’ and ‘at the present time’ mean the same thing. ‘Circumstances’ refers to the existential layout of the world, how all the substantial causes are positioned in the dimensions of space; while ‘time’ focuses on the positioning of these various circumstances along the dimension of time; at any given present, these two aspects of a single happening are bound to correspond, like two sides of the same coin. These first principles allow us to work out the valid processes which correlate natural and temporal conditionals in detail. I will not explore deductive arguments which mix natural and temporal modalities, in any great detail, but only enough to make the reader aware of their existence. 
In syllogism, we should note valid arguments such as the following (which follow from 1/naa by exposition):

When this S is M, it must be Q (or: cannot be Q)
When this S is P, it is always M
so, When this S is P, it is always Q (or: is never Q).

When this S is M, it must be Q (or: cannot be Q)
When this S is P, it is sometimes M
so, When this S is P, it is sometimes Q (or: nonQ).

However, an argument like the following would be invalid, because there is no guarantee that the circumstances for this S to be P are compatible with those for it to be Q (or nonQ, as the case may be):

When this S is M, it is always Q (or: is never Q)
When this S is P, it must be M
so, When this S is P, it can be Q (or: nonQ).

This mode is invalid, note well. Although 1/ccc, 1/cmm and 1/ctt are valid, the temporal conditionals c, m, or t are not subalterns of the natural conditional n. In production, modes of mixed modal type are subalterns of modes of uniform type, in accordance with the rules of categorical syllogism. This may result in compound conclusions, as in the following:

All P must be Q (implying, is always Q)
This S is sometimes P (implying, can be P)
therefore, When this S is P, it must be Q (1/npn)
and, When this S is P, it is always Q (1/ctc)

(likewise with a negative major term.) In apodosis, mixed-type ‘modus ponens’, like the following ones in ncc or ntt, are valid (since they can be reduced to a number of naa arguments):

When this S is P, it must be Q (or: nonQ)
and This S is sometimes, or always, P
hence, This S is sometimes or always Q (or: nonQ).

And also, note well, mixed-type ‘modus tollens’, like the following ones in ncc or ntt, are valid (since they can be reduced to a number of naa arguments):

When this S is P, it must be Q (or: nonQ)
and This S is sometimes not, or never, Q (or: nonQ)
hence, This S is sometimes not, or never, P.
This result is interesting, if we remember that the arguments below are not valid, since they involve inconsistent premises (the minor contradicts a base of the major):

When this S is P, it must be Q (or: nonQ)
and This S cannot be Q (or: nonQ)
hence, This S cannot be P.

When this S is P, it is always Q (or: nonQ)
and This S is never Q (or: nonQ)
hence, This S is never P.

Additionally, note, a constant major premise coupled with a naturally necessary minor premise yields a conclusion, granting that for categoricals n implies c. Thus, cnc is valid, as a subaltern of ccc. But since ccc is invalid in cases of denial of the consequent, cnc only applies to cases of affirmation of the antecedent:

When this S is P, it is always Q (or: is never Q)
and This S must be P (implying, is always P)
hence, This S is always Q (or: is never Q).

We can similarly investigate disjunctive arguments of mixed modal type, and dilemma. Avi Sion
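The basis/connection apparatus for temporal conditionals can be modeled over discrete time points. A rough sketch, with the sets of times chosen arbitrarily for illustration (this is our gloss, not the author's notation):

```python
# Times at which 'this S' is P, and at which it is Q, over ten time points.
TIMES = range(10)
P = {0, 1, 2, 3}
Q = {0, 1, 2, 3, 7}

def sometimes(p, q):          # basis: S is sometimes both P and Q
    return any(t in p and t in q for t in TIMES)

def never(p, q):              # S is never both P and Q
    return not sometimes(p, q)

NON_Q = set(TIMES) - Q

# Constant 'When this S is P, it is always Q' = basis plus connection:
constant = sometimes(P, Q) and never(P, NON_Q)
# Temporary 'When this S is P, it is sometimes Q' = the basis alone:
temporary = sometimes(P, Q)

print(constant, temporary)    # True True: the constant subalternates the temporary
```

Denying either conjunct of the constant — 'S is never P' (no basis) or 'S is sometimes P and nonQ' (no connection) — makes `constant` false while `temporary` may survive, matching the contradiction conditions given under b. Properties.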
MAXIMUM INDEPENDENT SET
• INSTANCE: Graph G = (V, E).
• SOLUTION: An independent set of vertices, i.e., a subset V' ⊆ V such that no two vertices in V' are joined by an edge in E.
• MEASURE: Cardinality of the independent set, i.e., |V'|.
• Good News: See MAXIMUM CLIQUE.
• Bad News: See MAXIMUM CLIQUE.
• Comment: The same problem as MAXIMUM CLIQUE on the complementary graph. Admits a PTAS for planar graphs [53] and for unit disk graphs [264]. The case of degree bounded by B is APX-complete [393] and [75], is not approximable within [12], and not approximable within 1.0005 for B=3, 1.0018 for [76]. It is approximable within [75] for small B [290,14,460]. Also approximable on sparse graphs within a factor depending on d, where d is the average degree of the graph [224]. Approximable within a factor depending on k for k+1-claw free graphs [222]. The vertex weighted version is approximable within 3/2 for [248], within [218], and within [224]. The related problem in hypergraphs is approximable within [224], also in the weighted case. MAXIMUM INDEPENDENT SET OF K-GONS, the variation in which the number of pairwise independent k-gons (cycles of size k; k-gons are independent if any edge connecting vertices from different k-gons belongs to at least one of these k-gons) is maximized, is not approximable within 4/3 for any k, and is in APX for any k [457].
• Garey and Johnson: GT20
Viggo Kann
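The bounded-degree approximability results quoted above are typically established for simple heuristics. As a hedged illustration (a generic minimum-degree greedy sketch, not the specific algorithms of the cited references):

```python
def greedy_independent_set(adj):
    """Minimum-degree greedy heuristic: repeatedly pick a vertex of smallest
    degree, add it to the independent set, and delete it and its neighbours."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    independent = set()
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        independent.add(v)
        removed = adj[v] | {v}
        for u in removed:
            adj.pop(u, None)
        for ns in adj.values():
            ns -= removed
    return independent

# 4-cycle a-b-c-d: the heuristic finds an optimal independent set of size 2.
g = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
s = greedy_independent_set(g)
print(len(s))  # 2
```

On graphs of maximum degree B, analyses of this kind of greedy bound its approximation ratio by a factor growing roughly linearly in B, consistent with the constant-factor results for small B cited in the Comment.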
LiDAR Measurement Analysis in Range Domain [Article] JOURNAL OF SENSOR SCIENCE AND TECHNOLOGY - Vol. 33, No. 4, pp.187-195 ISSN: 1225-5475 (Print) 2093-7563 (Online) Print publication date 31 Jul 2024 Received 27 Jun 2024 Revised 05 Jul 2024 Accepted 19 Jul 2024 1Department of Mechanical and System Design Engineering, Hongik University, 94, Wausan-ro, Mapo-gu, Seoul, 04066, Korea This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. Light detection and ranging (LiDAR), a widely used sensor in mobile robots and autonomous vehicles, has as its most important function measuring the range of objects in three-dimensional space and generating point clouds. These point clouds consist of the coordinates of each reflection point and can be used for various tasks, such as obstacle detection and environment recognition. However, several processing steps are required, such as three-dimensional modeling, mesh generation, and rendering. Efficient data processing is crucial because LiDAR provides a large number of real-time measurements with high sampling frequencies. Despite the rapid development of controller computational power, simplifying the computational algorithm is still necessary. This paper presents a method for estimating the presence of curbs, humps, and ground tilt using range measurements from a single horizontal or vertical scan instead of point clouds. These features can be obtained by data segmentation based on linearization. The effectiveness of the proposed algorithm was verified by experiments in various environments.
LiDAR, Point cloud, Range measurement, Linear regression, Segmentation Light detection and ranging (LiDAR) is a sensor that uses laser light to measure distance in three-dimensional (3D) space and generate point clouds, a collection of points in 3D space. While point clouds provide dense and detailed information on 3D geometry, their irregular distribution necessitates efficient processing and analysis. Point clouds are widely used in autonomous vehicles for obstacle detection, 3D modeling during construction, and terrain/crop information collection in agriculture. Sensors such as LiDAR and cameras mounted on autonomous vehicles continuously collect and update 3D point-cloud data of the surroundings to reflect dynamic environmental changes. This real-time monitoring of the environment is crucial for path planning. Based on obstacle information, the vehicle avoids collisions, determines the driving area, and plans a safe path. Real-time LiDAR information is essential in unknown environments. Delivery robots, which are already in use, require curb and hump recognition capabilities not only in outdoor environments, such as driveways, sidewalks, and crosswalks, but also indoors. Similarly, agricultural or field robots operating in unstructured environments, such as fields, need to recognize unpaved ground conditions. Research has explored fusing inertial navigation and GPS information with LiDAR to generate road conditions and maps [1]. Other studies have focused on using these measurements for autonomous driving purposes: determining paved and unpaved surfaces based on intensity dispersion of the LiDAR laser beam [2], measuring the reflectivity of roads covered with dirt, cement, grass, and asphalt over a range to derive a correction formula for improving the use of reflectivity [3], and applying the RANSAC algorithm and adaptive thresholding to LiDAR point clouds for lane and curb detection [4]. 
Pothole detection using range data has also been explored [5], with flat ground and potholes being recognized by changes in range. Balancing data processing speed and accuracy to reduce computation time when determining road boundaries remains a key research topic [6]. Sensor fusion using cameras with LiDAR for 3D map updating [7] and object detection [8] is an additional area of investigation. Furthermore, research has been conducted on recognizing water puddles on the ground based on LiDAR intensity values [9]. This study proposes a method for recognizing ground conditions using only range data (instead of point clouds), focusing on range value changes according to the scan order of horizontal and vertical planes. By performing group segmentation through linear regression of the range values, curbs, humps, and ground tilt can be recognized based on the features of each group obtained from the linear regression results. The method can also be used to estimate the degree of ground tilt. Section 2 presents the relationship between LiDAR coordinates and horizontal and vertical indices. It also explains the fundamentals of the proposed method for an ideal environment. Section 3 describes the linear regression theory and its application in group segmentation. Section 4 analyzes the experimental results obtained in a real environment with humps and curbs to verify the validity of the method. In addition, it verifies the ground characteristics estimated by processing field-collected experimental data. LiDAR scans portions of vertical and horizontal planes to obtain range measurements. In the case of the Ouster OS1 used in the experiments, the horizontal plane is divided into 1024 measurements, and the vertical plane, over a range of 45°, is divided into 64. At a frequency of 10 Hz, a single complete scan outputs 64 × 1024 range measurements, with each measurement denoted by R(vi,hi). The coordinate axis and index settings are shown in Fig. 1.
The horizontal plane uses the −X direction as the reference direction, where the horizontal index hi is 1. The −Z-axis rotation direction is the direction in which the horizontal index increases. In other words, the yaw angle ψ decreases as the horizontal index increases. The resolution of the horizontal plane is Δψ = 360°/1024, and the horizontal index hi is an integer within 1 ≤ hi ≤ 1024. For the vertical plane measurement outputs, a 22.5° angle upward (θ = −22.5°) corresponds to a vertical index vi of 1. The index value increases in the downward direction. The direction of increase in pitch angle θ, Y-axis rotation, and direction of increase in vertical index all coincide. The resolution of the vertical plane is Δθ = 45°/64, and the vertical index vi is an integer within 1 ≤ vi ≤ 64. As depicted in Fig. 4, the coordinate axis direction of the driving robot matches the coordinate direction of the LiDAR. The coordinate system of the driving robot follows the ISO vehicle axis system. The origin of the coordinate axis was set to the origin of the LiDAR internal sensor, and the height from the ground was h = 0.56 m. Most LiDAR studies use point clouds. The range measurements are converted to Cartesian coordinates to locate the laser beam reflections. This information is then used for various tasks, such as obstacle location and map creation. The coordinates of point P(vi,hi) are obtained using the following equation:
$P(vi,hi) = {\left[\begin{array}{ccc} P_x & P_y & P_z \end{array}\right]}^{T}$ (1)
$P_x = R(vi,hi)\cos\theta(vi)\cos\psi(hi)$
$P_y = R(vi,hi)\cos\theta(vi)\sin\psi(hi)$
$P_z = R(vi,hi)\sin\theta(vi)$
After calculating the coordinates, terrain recognition processes (such as surface modeling) are performed by calculating the range between the point located in the area of interest and its four (as shown in Fig.
5) or eight neighbors. This study distinguishes itself from previous studies by interpreting changes in range measurements with respect to the vertical index in the vertical plane and the horizontal index in the horizontal plane, rather than relying on Cartesian coordinate values. This method offers several advantages compared to conventional approaches utilizing point clouds. First, it avoids calculating Eq. (1), thereby eliminating the associated computation time. Moreover, it does not calculate the 3D distance between adjacent points and avoids surface modeling processes. In conventional approaches using point clouds, each point has three coordinates. In this study, only the range values are used with respect to the vertical and horizontal indices. Additionally, each index is equally spaced, significantly simplifying the computation. Assuming that there is a perfect plane 0.56 m below the LiDAR origin, the scanned range value is calculated and plotted in Cartesian coordinates, as illustrated in Fig. 7. In other words, the position of each measurement point is expressed as a coordinate value, and the conventional method using point clouds uses this coordinate value to build surface models or features. In this case, the range is constant with respect to the yaw angle (refer to Fig. 8) when a perfect horizontal plane is assumed. This interpretation offers two significant advantages: the ability to estimate ground level from the distribution of range values and the determination of the LiDAR sensor’s tilt relative to the ground. These are obtained by applying linear regression to the measurements. If the yaw angle remains constant (e.g., ψ = 0°) and the range is calculated according to the pitch angle θ, the resulting Cartesian coordinates of the measured points are shown in Fig. 9. In this case, the range values with respect to the pitch angle θ are as depicted in Fig. 10. Fig. 11 shows the sample sets used in the analysis.
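As a concrete illustration of Eq. (1), the index-to-coordinate conversion can be sketched as below. This is not the authors' code; the exact angular offsets (hi = 1 at ψ = 180° for the −X reference direction, vi = 1 at θ = −22.5°) are assumptions reconstructed from the index conventions described above.

```python
import math

def point_from_range(r, vi, hi):
    """Convert a single range measurement R(vi, hi) to Cartesian
    coordinates [Px, Py, Pz] per Eq. (1), using an index-to-angle
    mapping assumed for a 64 x 1024, 45-degree-vertical-FOV scanner."""
    # Vertical index 1 corresponds to theta = -22.5 deg (pointing up);
    # theta increases downward with a resolution of 45/64 deg per step.
    theta = math.radians(-22.5 + (vi - 1) * 45.0 / 64.0)
    # Horizontal index 1 is assumed at the -X reference (yaw 180 deg);
    # yaw decreases as hi increases, with 360/1024 deg per step.
    psi = math.radians(180.0 - (hi - 1) * 360.0 / 1024.0)
    px = r * math.cos(theta) * math.cos(psi)
    py = r * math.cos(theta) * math.sin(psi)
    pz = r * math.sin(theta)
    return px, py, pz
```

For example, vi = 33 and hi = 513 map to θ = 0° and ψ = 0° under these assumptions, i.e., a point straight ahead along the +X axis at distance R.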
Three sets were obtained by varying the horizontal index for a constant vertical index, and three sets were obtained by varying the vertical index for a constant horizontal index. The sample selection was determined by the degree of surface change, computing power of the controller, and speed of the robot. To determine the relation between the index and range, linear regression is used. The index input variable is $I_i\,(i=1,\cdots,N)$, which is either the vertical index vi or the horizontal index hi, whereas the output variable is the range measurement $R_i\,(i=1,\cdots,N)$. The relationship is assumed to be linear and is expressed as Eq. (2):
$\hat{R} = p_1 I + p_2,$ (2)
where $\hat{R}$ is the output value from the linearized equation. Parameters $p_1$ and $p_2$ are obtained using
$\min_{p_1,p_2}\sum_{i=1}^{N}\left(R_i-\hat{R}_i\right)^2 = \min_{p_1,p_2}\sum_{i=1}^{N}\left(R_i-p_1 I_i-p_2\right)^2.$ (3)
To reduce the number of computations, the following three expressions are defined:
$S_{II} = \sum_{i=1}^{N}\left(I_i-\bar{I}\right)^2 = \sum_{i=1}^{N}I_i^2 - N\bar{I}^2$ (4)
$S_{RR} = \sum_{i=1}^{N}\left(R_i-\bar{R}\right)^2 = \sum_{i=1}^{N}R_i^2 - N\bar{R}^2$ (5)
$S_{IR} = \sum_{i=1}^{N}\left(I_i-\bar{I}\right)\left(R_i-\bar{R}\right) = \sum_{i=1}^{N}I_i R_i - N\bar{I}\bar{R},$ (6)
where $\bar{I}=\frac{1}{N}\sum_{i=1}^{N}I_i$ and $\bar{R}=\frac{1}{N}\sum_{i=1}^{N}R_i$. The parameters $p_1$ and $p_2$ in Eq. (2) are obtained using Eqs. (7) and (8), respectively:
$p_1 = \frac{S_{IR}}{S_{II}}$ (7)
$p_2 = \bar{R} - p_1\bar{I}$ (8)
Depending on the data distribution, the entire dataset may not be valid for one linear equation; this can be verified with the sum of squared errors. If invalid, the data must be segmented into two groups.
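A minimal sketch of the fit in Eqs. (4)-(8), using the shortcut sums so each quantity is computed in a single pass (illustrative Python, not the authors' implementation):

```python
def fit_line(I, R):
    """Least-squares fit R_hat = p1*I + p2 via the sums of Eqs. (4)-(6);
    returns (p1, p2, sse), where sse is the residual of Eq. (10)."""
    n = len(I)
    I_bar = sum(I) / n
    R_bar = sum(R) / n
    S_II = sum(i * i for i in I) - n * I_bar * I_bar              # Eq. (4)
    S_RR = sum(r * r for r in R) - n * R_bar * R_bar              # Eq. (5)
    S_IR = sum(i * r for i, r in zip(I, R)) - n * I_bar * R_bar   # Eq. (6)
    p1 = S_IR / S_II                                              # Eq. (7)
    p2 = R_bar - p1 * I_bar                                       # Eq. (8)
    sse = S_RR - S_IR * S_IR / S_II                               # Eq. (10)
    return p1, p2, sse
```

Because the indices are equally spaced integers, the denominator $S_{II}$ is never zero for a segment of two or more points.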
This is achieved by finding the value k that classifies one group of range measurements as i = 1,⋯,k-1 and the other group as i = k,⋯,N. This is an optimization problem that minimizes the following objective function:
$\min_k J = \min_k\left(\sum_{i=1}^{k-1}\left(R_i-\hat{R}_{1,i}\right)^2 + \sum_{i=k}^{N}\left(R_i-\hat{R}_{2,i}\right)^2\right),$ (9)
where $\hat{R}_{1,i}$ is the output calculated using the equation after linearizing the group composed of i = 1,⋯,k-1, and $\hat{R}_{2,i}$ is the output for the group consisting of i = k,⋯,N. To increase the speed of the optimization process, the following representation is used when calculating Eq. (9):
$\sum\left(R_i-\hat{R}_i\right)^2 = S_{RR} - \frac{S_{IR}^2}{S_{II}}$ (10)
Similarly, for three segments, the objective function is
$J = \sum_{i=1}^{k_1-1}\left(R_i-\hat{R}_{1,i}\right)^2 + \sum_{i=k_1}^{k_2-1}\left(R_i-\hat{R}_{2,i}\right)^2 + \sum_{i=k_2}^{N}\left(R_i-\hat{R}_{3,i}\right)^2.$ (11)
Generally, when the number of segments is unknown, an optimization process is used [10,11]. A penalty term that increases linearly with the number of segments is added because adding more segments always decreases the residual error. If there are K change points to be found, the function minimizes
$J(K) = \sum_{r=0}^{K-1}\sum_{i=k_r}^{k_{r+1}-1}\left(R_i-\hat{R}_i\right)^2 + \beta K,$ (12)
where $k_0$ and $k_K$ are the first and the last data indices, respectively. The sum of squared errors is affected by two main factors. First, it depends on the material of the surface being scanned by the LiDAR. Table 1 presents the results obtained by testing four flat surfaces: soil, brick, asphalt, and pebble.
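The two-segment split of Eq. (9), accelerated by the residual identity of Eq. (10), can be sketched as an exhaustive search over the change point k (illustrative Python; the authors' implementation details are not given in the text):

```python
def segment_sse(I, R):
    """Residual sum of squares of a single linear fit, via Eq. (10)."""
    n = len(I)
    I_bar, R_bar = sum(I) / n, sum(R) / n
    S_II = sum(i * i for i in I) - n * I_bar ** 2
    S_RR = sum(r * r for r in R) - n * R_bar ** 2
    S_IR = sum(i * r for i, r in zip(I, R)) - n * I_bar * R_bar
    return S_RR - S_IR ** 2 / S_II

def best_split(I, R):
    """Exhaustive search for the change point minimizing Eq. (9):
    the data are split into i = 1..k-1 and i = k..N, each part is
    fitted by its own line, and the combined residual is minimized."""
    n = len(I)
    best_k, best_j = None, float("inf")
    for k in range(2, n - 1):  # each segment needs at least two points
        j = segment_sse(I[:k], R[:k]) + segment_sse(I[k:], R[k:])
        if j < best_j:
            best_k, best_j = k, j
    return best_k, best_j
```

A three-segment search (Eq. (11)) follows the same pattern with two nested loops over $k_1 < k_2$, and the linear penalty of Eq. (12) simply adds $\beta$ per extra segment when comparing candidate models.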
The root mean squared error for one segment is used rather than the sum of squared errors because the number of measurements can vary depending on the LiDAR resolution and the range of θ. If the root mean squared error of the measurements is smaller than this value, it can be modeled as a one-segment flat surface. If the sum of the squared errors is still larger than the threshold value, the number of groups is increased by one, and a similar optimization is performed. The second factor affecting the sum of squared errors is the presence of significant differences between measurements and/or substantial surface slope changes. The number of segments and the model for each segment can be determined using Eq. (12). However, from a practical perspective, we limit the number of segments to three for two reasons: 1) the area for immediate local path planning is close to the robot, and 2) the precision of the measurements is inversely proportional to the range. The advantages of using range measurements instead of point clouds are evident because the equation for a line in space is
$\frac{x-x_o}{a} = \frac{y-y_o}{b} = \frac{z-z_o}{c}.$ (13)
In our approach, finding the fitted line equation requires obtaining a solution of two parameters ($p_1$, $p_2$) instead of six ($x_o$, $y_o$, $z_o$, a, b, c).
4. EXPERIMENT
The predicted range measurements when the robot encounters a hump or curb are illustrated in Fig. 13. In particular, the range changes significantly around the edge of the hump. In the environment displayed in Fig. 14, as the robot traverses from the top left corner to the bottom right, some areas of the point cloud appear unmeasured, as depicted in Fig. 15. In this experiment, the LiDAR’s vertical plane measurement resolution is Δθ = 45°/128, and θ = 22.5° when the vertical index vi is 128. The following two measurements were performed with an OS1 128CH (Ouster Inc.) installed on a WeGo ST (WeGo Robotics, Co. Ltd.).
The vertical scan results with respect to the vertical index are presented as circular points in Fig. 16. Using the segmentation method described in the previous section, the data were classified into two groups. The results of linearizing each group are shown by the red lines in Fig. 16. From this, we can detect the corner location and estimate the depth of the hump as the difference between the height of the last point of the left group and that of the first point of the right group. In the opposite direction of the previous case, that is, driving from the bottom right corner to the top left in the environment shown in Fig. 14, the curb is located in front of the car. For this case, the point cloud is illustrated in Fig. 17. Instead of using point-cloud information, the range values are segmented (as in the previous case) and categorized into three segments, as displayed in Fig. 18. In the figure, the left, center, and right segments represent the top surface, the wall of the curb, and the current driving surface. As expected, the range measurements do not change significantly at the corners of the curb. The parameters for the linearization of each segment are listed in Table 2. Fig. 19 depicts the linear models for one segment and two segments. The performance indices for the three different cases are listed in Table 3, revealing that the performance decreases as the number of segments increases. The next environment is unpaved ground, as illustrated in Fig. 20. The left side slopes uphill, while the right side slopes downhill. For this experiment, an OS0 64CH from Ouster Inc. was installed on a WeGo RANGER 2.0 from WeGo Robotics, Co. Ltd. A GNSS receiver, providing position with ±3 cm accuracy, was also included in the setup. The LiDAR measured the range while the robot was moving at 0.9273 m/s, a speed calculated using GNSS data. The rotation rate of the LiDAR was set to 10 Hz, which provided 64 × 1024 measurements every 0.1 s.
When the region of interest was −30° ≤ ψ ≤ 30°, the robot movement during a 60° scan was 0.01545 m. As described in Section 2 (see Fig. 11), we use range measurement sets from vertical and horizontal scans. Fig. 21 shows the point cloud. The LiDAR used in this experiment had a vertical plane measurement resolution of Δθ = 45°/64, with θ = 22.5° corresponding to a vertical index of 64. The horizontal plane measurement resolution was Δψ = 360°/1024, with the center direction corresponding to a horizontal index of 512. Fig. 22 depicts the measurement ranges for the horizontal index from 480 to 540 (−10.5° < ψ < 10.5°) and the vertical index from 55 to 64 (15.3° ≤ θ ≤ 22.5°). To interpret this numerically, we segmented the distribution of range measurements according to the horizontal index for vertical indices of 64, 60, and 55. The results are illustrated in Fig. 23. This figure quantitatively represents the change in the robot’s roll angle (X-axis rotation) relative to the ground. It can be qualitatively determined from this figure that the range change at the left side is small while that at the right side is large. This implies a larger magnitude of ground surface tilt in the Y-axis direction on the left side and a smaller tilt on the right. Furthermore, the area with a large vertical index (i.e., the area closer to the ground) exhibits a decrease in the range measurement toward the right in the horizontal plane scan. Conversely, the area with a small vertical index shows an increase in the range measurement to the right. This indicates that the ground is higher on the left side in the near range and higher on the right side in the far range. To verify this interpretation, two points on each segment were selected (A1, A2; B1, B2; B3, B4; and C1, C2) as shown in Fig. 23. The position of each point is plotted in Cartesian coordinates in Fig. 24. Fig.
25 depicts the range measurements as the vertical index varies from 55 to 64 for three specific horizontal indices: 510 near the center, 490 to the left, and 530 to the right. All three cases represent single segments with distinct slopes along the vertical axis. In this region, there are no humps or curbs, which means that the height change on the right side is greater than that on the left side.
5. CONCLUSIONS
This study proposes a method for estimating ground characteristics using LiDAR range measurements instead of point clouds, which are more commonly used for environmental recognition. By processing the range measurements from the same horizontal scan plane or the same vertical scan plane with linear regression-based data segmentation, the method can identify curbs and humps and estimate the slope of continuous ground surfaces. Eliminating the need for processing 3D point clouds significantly reduces computational complexity and increases processing speed. This estimated ground information will significantly contribute to local path planning when driving in unstructured, unknown, or dynamic environments. This research was supported in part by the Basic Research Project of Korea Institute of Machinery and Materials (Project ID: NK242I).
Combinatorics and Chandas-śāstra‑2 Anaadi Foundation
Origins of Combinatorics in Chandas-śāstra
The Chandas-śāstra has some very interesting and intricate connections with mathematics. The word chandas refers to prosody, the science of metres. It has been estimated by scholars that this Chandas-śāstra was composed by Piṅgala-nāga around the 3rd century BCE, though there is some uncertainty about his period. In his Chandas-śāstra, Piṅgala introduces some combinatorial tools called pratyayas which can be employed to study the various possible metres in Sanskrit prosody. The algorithms presented by him form the earliest examples of the use of recursion in Indian mathematics. In the forthcoming articles we shall delve deeper into the various algorithms or pratyayas enunciated by Piṅgalācārya.
Pratyayas in Piṅgala's Chandas-śāstra
In chapter eight of Chandas-śāstra, Piṅgala introduces the following six pratyayas: Prastāra: a procedure by which all the possible metrical patterns with a given number of syllables are laid out sequentially as an array. Saṅkhyā: the process of finding the total number of metrical patterns (or rows) in the prastāra. Naṣṭa: the process of finding, for any row with a given number, the corresponding metrical pattern in the prastāra. Uddiṣṭa: the process of finding, for any given metrical pattern, the corresponding row number in the prastāra. Lagakriyā: the process of finding the number of metrical forms with a given number of laghus (or gurus). Adhvayoga: the process of finding the space occupied by the prastāra. Piṅgalācārya presents the steps for constructing the prastāra, which is the first of the six pratyayas, in four terse sūtras.
The procedure outlined here helps in the ordered and consistent listing of all the possible combinations of an n-syllabled metrical pattern, which is technically termed varṇa-vṛtta. It is important to keep in mind that this construction is for varṇa-vṛtta or metrical meters. The following sūtras correspond to the procedure of constructing the prastāra: द्विकौ ग्लौ। मिश्रौ च। पृथग्लामिश्राः। वसवास्त्रिकाः।
The steps presented here are essentially the following: 1. Form a G, L pair. Write them one below the other. 2. Insert on the right Gs [in one pair] and Ls [in another]. 3. [Repeating the process] we have eight (vasavaḥ) metric forms in the 3-syllable-prastāra. To illustrate, we shall consider the following pair:
G
L
Now we add Gs to the right of this pair, and then, with the same pair, add Ls to the right:
GG
LG
GL
LL
The above forms the enumeration of a 2-syllabled meter, hence called the 2-syllable-prastāra. To form the third, we use the above set, add Gs to the right, and then use the same set and add Ls to the right to obtain the 3-syllable-prastāra. The following illustration captures the same:
GGG
LGG
GLG
LLG
GGL
LGL
GLL
LLL
Iteratively, doing the same shall lead to higher orders. As explained in the previous edition, substituting 0 for G and 1 for L and taking the mirror image will lead to the modern binary numbers' ordered representation. There are also other kinds of prastāras: one for the enumeration of mātrā-vṛtta or moric meters that are based on the number of beats/units, the Tāna-Prastāra for the enumeration of permutations or tānas of svaras, and the Tāla-Prastāra for the enumeration of tāla forms. All these can be broadly put under the body of development of combinatorics in India, which we shall see later in this same series. Piṅgala has only briefly touched upon mātrā-vṛtta in Chapter IV of Chandas-śāstra while discussing the various forms of Āryā and Vaitālīya vṛtta.
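The doubling construction described above, append G to the right of every row of the previous listing and then repeat the listing appending L, can be sketched recursively; Python is used here purely for illustration:

```python
def prastara(n):
    """Enumerate the n-syllable prastara by Pingala's recursion:
    take the (n-1)-syllable listing, append G to the right of each
    row, then repeat the listing appending L."""
    if n == 1:
        return ["G", "L"]
    prev = prastara(n - 1)
    return [row + "G" for row in prev] + [row + "L" for row in prev]

# Substituting G -> 0, L -> 1 and reading each row mirrored gives the
# ordinary binary representation of its row number (counted from 0).
for idx, row in enumerate(prastara(3)):
    assert int(row[::-1].replace("G", "0").replace("L", "1"), 2) == idx
```

Running `prastara(3)` reproduces the eight rows listed above, GGG through LLL, in the same order.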
The studies of other vṛttas happened in later periods, which we shall learn about at those respective stages of this series.
Alternative Algorithm ~ Prastāra
In the Vṛttaratnākara authored by Kedāra (c. 1000 CE), we find the presentation of another ingenious algorithm for building the n-syllable-prastāra. पादे सर्वगुरावाद्याल्लघुं न्यस्य गुरोरधः। यथोपरि तथा शेष भूयः कुर्यादमुं विधिम्। ऊने दद्याद्गुरूनेव यावत्सर्वलघुर्भवेत्। ( वृत्तरत्नाकरम् ६.२‑३ ) Start with a row of Gs. Scan from the left to identify the first G. Place an L below that. The elements to the right are brought down as they are. All the places to the left are filled up by Gs. Go on till a row of only Ls is reached. A great advantage of this algorithm is that the ordered sequence can be built from any given instance in the listing. The following illustration presents five successive rows in the 4-syllable prastāra built using the algorithm in the Vṛttaratnākara:
GGGL
LGGL
GLGL
LLGL
GGLL
In row 1, we have GGGL. Scanning from left to right, the moment we encounter a G, we make it an L in the row below and retain the rest of the string as is. Then in row 2, we have LGGL. Scanning from the left, we identify the first G (the second character), switch it to L, flip everything to the left of that location to Gs, and keep the characters to the right as is. Continuing in this fashion, the prastāra can be built. It is indeed enthralling to know in depth about the rich scientific heritage of the Indian civilization. We shall continue to learn about the other algorithms or pratyayas enunciated by Piṅgalācārya in the following editions.
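Kedāra's row-by-row rule lends itself directly to code; the sketch below (illustrative Python) also shows the stated advantage that the listing can be resumed from any given row:

```python
def next_row(row):
    """Kedara's rule (Vrttaratnakara 6.2-3): scan from the left for
    the first G, replace it with L, keep everything to its right, and
    fill everything to its left with Gs. Returns None after the all-L
    row, which closes the listing."""
    i = row.find("G")
    if i == -1:  # all-L row: the prastara is complete
        return None
    return "G" * i + "L" + row[i + 1:]

def prastara_from(row):
    """Build the ordered listing starting from any given row."""
    rows = [row]
    while (row := next_row(row)) is not None:
        rows.append(row)
    return rows
```

Starting from the all-G row of length n, this generates all 2^n rows, matching Piṅgala's ordering; starting from GGGL it reproduces the five rows illustrated above.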
Download Collected Papers on Wave Mechanics (Second Edition) by Erwin Schrödinger PDF Read or Download Collected Papers on Wave Mechanics (Second Edition) PDF Similar mechanics books Mechanics of Hydraulic Fracturing (2nd Edition) Revised to include current elements considered for today's unconventional and multi-fracture grids, Mechanics of Hydraulic Fracturing, Second Edition explains one of the most important features of fracture design: the ability to predict the geometry and characteristics of the hydraulically induced fracture. Partial differential equations of mathematical physics Harry Bateman (1882-1946) was an esteemed mathematician particularly known for his work on special functions and partial differential equations. This book, first published in 1932, has been reprinted many times and is a classic example of Bateman's work. Partial Differential Equations of Mathematical Physics was developed chiefly with the aim of obtaining exact analytical expressions for the solution of the boundary problems of mathematical physics. Moving Loads on Ice Plates is a unique study into the effect of vehicles and aircraft travelling across floating ice sheets. It synthesizes in a single volume, with a coherent theme and nomenclature, the diverse literature on the topic, hitherto available only as research journal articles. Chapters on the nature of fresh water ice and sea ice, and on applied continuum mechanics, are included, as is a chapter on the subject's venerable history in related areas of engineering and science. This volume constitutes the proceedings of a satellite symposium of the XXXth Congress of the International Union of Physiological Sciences. The symposium was held in Banff, Alberta, Canada, July 9-11, 1986. The program was organized to provide a selective overview of current developments in cardiac biophysics, biochemistry, and physiology.
Additional resources for Collected Papers on Wave Mechanics (Second Edition)
Example text [the excerpt's equations, a Blatz-Ko strain-energy expression and a viscoelastic stress convolution integral, are garbled beyond recovery in this extract; the readable portions follow]: The viscoelastic stress modeled by Li and Lau [Eq. (5)] was selected as it has been shown to produce reasonable results for polymers under high-rate loadings [1, 3]. The convolution integral was solved either through direct numerical integration or a state variables approach [2, 10, 11]. Fig. 4: Fluid and solid kinematic and force quantities along the domain at the end of 100 h; (a) solid density, (b) fluid density, (c) fluid stress, (d) interactive force along the domain. Tandon et al. [5] studied the oxidation layer growth via a diffusion reaction equation assuming an ideal fluid permeating through a rigid solid. Accordingly, in their model the deformation of the solid and viscous effects in the fluid are neglected. In this section, we present numerical results for the oxidation behavior of polyimide PMR15 resin based on the oxidation reaction model developed in the works of Tandon et al. [5]. For the sake of completeness, we provide a brief description of the oxidation process in polymers. However, for a detailed description of the oxidation process and the reaction kinetics model, refer to [5, 12]. The oxidation front in polymer materials advances through a combination of diffusion and reaction mechanisms. The exposed surface reacts with the diffusing air, depleting the amount of polymer.
Maths & ML Gems This is a list of wonderful papers in machine learning, reflecting my own tastes and interests. • Least Squares Quantization in PCM by Stuart P Lloyd (1982 but he got the method twenty years earlier). Definition of Lloyd's algorithm for k-means clustering. • The James-Stein paradox in estimation by James and Stein, 1961. Sometimes, Maximum-likelihood is not the best estimator, even in a L2 world. • Generalized Procrustes analysis by Gower (1975) • Universal approximation theorem by Cybenko (1989) • Compressed sensing paper by Candès, Romberg, and Tao (2006). • Scale Mixtures of Gaussians and the Statistics of Natural Images by Wainwright and Simoncelli (1999). • Probabilistic PCA by Tipping and Bishop (1999). Lightweight generative model. • Annealed importance sampling by Radford Neal (2001). • A computational approach to edge detection by John Canny (1986) • The Sparse PCA paper by Zou, Hastie, Tibshirani (2006). • The BBP transition by Baik, Ben Arous, and Péché (2004). Probably the most important paper in random matrix theory. • On spectral clustering by Ng, Jordan and Weiss (2001). • Hyvärinen's Score Matching paper in 2005. • Exact Matrix Completion via Convex Optimization by Candès and Recht (2009). Matrix completion via optimization. • Matrix completion from a few entries by Keshavan, Montanari and Oh (2009). Matrix completion from SVD thresholding is (was?) the go-to method for sparse matrix completion. • Adaptive mixtures of experts by Jacobs et al. Introduces the famous MoE method. • The NTK paper by Jacot, Gabriel and Hongler (2018). • Density estimation by dual ascent of the log-likelihood by Tabak and Vanden-Eijnden (2010), first definition of coupling layers for normalizing flows. • Implicit regularization in deep networks by Martin and Mahoney (2021). On the training dynamics of the hessian spectrum of DNNs. • Edge of Stability paper by Cohen et al. • A U-turn on double descent by Curth et al.
• Emergence of scaling in random networks, the original paper by Barabasi and Albert (1999) • Error in high-dimensional GLMs by Barbier et al. (2018) • Spectral algorithms for clustering by Nadakuditi and Newman, from an RMT perspective • Spectral redemption in clustering sparse networks by Krzakala et al. (2013): classical versions of spectral clustering are failing for sparse graphs, but the authors show that a simple modification of the Laplacian matrix can lead to a successful clustering. • On Estimation of a Probability Density Function and Mode, the famous kernel density estimation paper by Parzen (1962) • Power laws, Pareto distributions and Zipf’s law, the survey by Newman on heavy-tails • ISOMAP, nonlinear dimensionality reduction for manifold learning. • t-SNE, the paper introducing the t-SNE dimension reduction technique, by van der Maaten and Hinton (2008) • Best subset or Lasso, by Hastie, Tibshirani and Friedman (2017) • Smoothing by spline functions, one of the seminal papers on spline smoothing, by Reinsch (1967) • Spline smoothing is almost kernel smoothing, a striking paper by Silverman (1984), and its generalization by Ong, Milanfar and Getreuer (2019). Global optimization problems (such as interpolation) can be approximated by local operations (kernel smoothing). • Tweedie's formula and selection bias, a landmark paper by Bradley Efron. Tweedie's formula is key to many techniques in statistics, including diffusion-based generative models. • The ADAM optimizer by Kingma and Ba (2014). • The BatchNorm paper by Ioffe and Szegedy (2015). • The LayerNorm paper by Ba et al. (2016). • The Dropout paper by Srivastava et al. (2014). • The AlexNet paper by Krizhevsky, Sutskever, and Hinton (2012). • Normalizing flows by Rezende and Mohamed (2015). They're not so popular now, but the paper is really a gem. • Invariant and equivariant graph networks by Maron et al. (2019).
They compute the dimension of invariant and equivariant linear layers and study GNN expressivity. • The original paper introducing generative diffusion models, by Sohl-Dickstein et al (2015) • The second paper of diffusions by Song et al (2020) • The Stable Diffusion paper by Rombach et al (2021) • The Neural ODE paper by Chen et al. (2018) • Attention is all you need, 2017. This paper changed the world. • RoFormer (rotary position embeddings), https://arxiv.org/abs/2104.09864, a killer method. • Flow matching by Lipman et al, 2022, the most elegant generalization of diffusion models. • The data-driven Schrödinger bridge by Pavon, Tabak and Trigila (2021) • Language models are few-shot learners on LLM scaling laws • The Wasserstein GAN paper by Arjovsky, Chintala and Bottou (2017) • YOLO, now at its 11th version! • Deep learning for symbolic mathematics by Lample and Charton (2019) • The Convmixer paper: fitting a big convolutional network in a tweet. • An image is worth 16x16 words, the original Vision Transformer paper by Dosovitskiy et al. (2020). The paper that started the revolution of transformers in computer vision. • Image Segmentation as rendering • Per-Pixel Classification is Not All You Need for Semantic Segmentation • Segment Anything, the original paper on segmentation by Kirillov et al. (2023) which really pushed the field forward.
Nautical miles (International) to Fingerbreadth Converter
Enter Nautical miles (International)
Switch to Fingerbreadth to Nautical miles (International) Converter
How to use this Nautical miles (International) to Fingerbreadth Converter
Follow these steps to convert a given length from the units of Nautical miles (International) to the units of Fingerbreadth.
1. Enter the input Nautical miles (International) value in the text field.
2. The calculator converts the given Nautical miles (International) into Fingerbreadth in real time using the conversion formula, and displays it under the Fingerbreadth label. You do not need to click any button. If the input changes, the Fingerbreadth value is re-calculated, just like that.
3. You may copy the resulting Fingerbreadth value using the Copy button.
4. To view a detailed step-by-step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on the reset button present below the input field.
What is the Formula to convert Nautical miles (International) to Fingerbreadth?
The formula to convert a given length from Nautical miles (International) to Fingerbreadth is:
Length[(Fingerbreadth)] = Length[(Nautical miles (International))] / 0.000010286177040041145
Substitute the given value of length in nautical miles (international), i.e., Length[(Nautical miles (International))], in the above formula and simplify the right-hand side. The resulting value is the length in fingerbreadth, i.e., Length[(Fingerbreadth)]. Calculation will be done after you enter a valid input.
Consider that a luxury yacht cruises 120 nautical miles on a journey. Convert this distance from nautical miles to Fingerbreadth.
The length in nautical miles (international) is:
Length[(Nautical miles (International))] = 120
The formula to convert length from nautical miles (international) to fingerbreadth is:
Length[(Fingerbreadth)] = Length[(Nautical miles (International))] / 0.000010286177040041145
Substitute the given length Length[(Nautical miles (International))] = 120 in the above formula.
Length[(Fingerbreadth)] = 120 / 0.000010286177040041145
Length[(Fingerbreadth)] = 11666141.8069
Final Answer: Therefore, 120 nmi is equal to 11666141.8069 fingerbreadth.
Consider that an aircraft travels 500 nautical miles to reach its destination. Convert this distance from nautical miles to Fingerbreadth.
The length in nautical miles (international) is:
Length[(Nautical miles (International))] = 500
The formula to convert length from nautical miles (international) to fingerbreadth is:
Length[(Fingerbreadth)] = Length[(Nautical miles (International))] / 0.000010286177040041145
Substitute the given length Length[(Nautical miles (International))] = 500 in the above formula.
Length[(Fingerbreadth)] = 500 / 0.000010286177040041145
Length[(Fingerbreadth)] = 48608924.1954
Final Answer: Therefore, 500 nmi is equal to 48608924.1954 fingerbreadth.
Nautical miles (International) to Fingerbreadth Conversion Table
The following table gives some of the most used conversions from Nautical miles (International) to Fingerbreadth.
Nautical miles (International) (nmi) = Fingerbreadth (fingerbreadth)
0 nmi = 0 fingerbreadth
1 nmi = 97217.8484 fingerbreadth
2 nmi = 194435.6968 fingerbreadth
3 nmi = 291653.5452 fingerbreadth
4 nmi = 388871.3936 fingerbreadth
5 nmi = 486089.242 fingerbreadth
6 nmi = 583307.0903 fingerbreadth
7 nmi = 680524.9387 fingerbreadth
8 nmi = 777742.7871 fingerbreadth
9 nmi = 874960.6355 fingerbreadth
10 nmi = 972178.4839 fingerbreadth
20 nmi = 1944356.9678 fingerbreadth
50 nmi = 4860892.4195 fingerbreadth
100 nmi = 9721784.8391 fingerbreadth
1000 nmi = 97217848.3908 fingerbreadth
10000 nmi = 972178483.9083 fingerbreadth
100000 nmi = 9721784839.0834 fingerbreadth
Nautical miles (International)
A nautical mile (international) is a unit of length used in maritime and aviation contexts. One nautical mile is equivalent to 1,852 meters or approximately 1.15078 miles. The nautical mile is defined based on the Earth's circumference and is equal to one minute of latitude. Nautical miles are used worldwide for navigation at sea and in the air. They are particularly important for charting courses and distances in maritime and aviation industries, ensuring consistency and accuracy in navigation.
Fingerbreadth
A fingerbreadth is a historical unit of length based on the width of a person's finger. One fingerbreadth is approximately equivalent to 0.01905 meters (about three-quarters of an inch), consistent with the conversion factor used above. The fingerbreadth is defined as the width of a finger at its widest point, typically used for practical measurements in various contexts such as textiles and small dimensions. Fingerbreadths were used in historical measurement systems to provide a simple and accessible means of measuring smaller lengths and dimensions. While not commonly used today, the unit offers insight into traditional measurement practices and standards.
Frequently Asked Questions (FAQs)
1. What is the formula for converting Nautical miles (International) to Fingerbreadth in Length?
The formula to convert Nautical miles (International) to Fingerbreadth in Length is: Nautical miles (International) / 0.000010286177040041145 2. Is this tool free or paid? This Length conversion tool, which converts Nautical miles (International) to Fingerbreadth, is completely free to use. 3. How do I convert Length from Nautical miles (International) to Fingerbreadth? To convert Length from Nautical miles (International) to Fingerbreadth, you can use the following formula: Nautical miles (International) / 0.000010286177040041145 For example, if you have a value in Nautical miles (International), you substitute that value in place of Nautical miles (International) in the above formula, and solve the mathematical expression to get the equivalent value in Fingerbreadth.
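The conversion described above is a single division by the stated constant. A minimal Python sketch of the same formula (the constant and the expected results are taken directly from the text above):

```python
# Conversion factor from the formula above:
# fingerbreadth = nmi / 0.000010286177040041145
NMI_PER_FINGERBREADTH = 0.000010286177040041145

def nmi_to_fingerbreadth(nmi: float) -> float:
    """Convert nautical miles (international) to fingerbreadths."""
    return nmi / NMI_PER_FINGERBREADTH

# Worked examples from the text:
print(round(nmi_to_fingerbreadth(120), 4))  # 11666141.8069
print(round(nmi_to_fingerbreadth(500), 4))  # 48608924.1954
```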
A catamaran is a multi-hulled watercraft featuring two parallel hulls of equal size. It is a geometry-stabilized craft, deriving its stability from its wide beam, rather than from a ballasted keel as with a monohull sailboat. Catamarans typically have less hull volume, smaller displacement, and shallower draft (draught) than monohulls of comparable length. The two hulls combined also often have a smaller hydrodynamic resistance than comparable monohulls, requiring less propulsive power from either sails or motors. The catamaran's wider stance on the water can reduce both heeling and wave-induced motion, as compared with a monohull, and can give reduced wakes. Catamarans range in size from small (sailing or rowing vessels) to large (naval ships and car ferries). The structure connecting a catamaran's two hulls ranges from a simple frame strung with webbing to support the crew to a bridging superstructure incorporating extensive cabin and/or cargo space.
INSEL & MOLLAND’S METHOD FOR CATAMARAN RESISTANCE PREDICTION (1992)
The paper by Insel and Molland (1992) summarizes a calm water resistance investigation into high speed semi-displacement catamarans, with symmetrical hull forms, based on experimental work carried out at the University of Southampton. Two interference effects contributing to the total resistance were established: viscous interference, caused by asymmetric flow around the demihulls, which affects boundary layer formation, and wave interference, due to the interaction of the wave systems produced by each demihull. All models were tested over a range of Froude numbers of 0.1 to 1.0 in the demihull configuration and catamaran configuration with separation ratios, s/L, of 0.2, 0.3, 0.4 and 0.5. Calm water resistance, running trim, sinkage and wave pattern analysis experiments were carried out.
The authors conclude that the form factor, for practical purposes, is independent of speed and should thus be kept constant over the speed range. This was a good practical solution to a complex engineering problem at that point in time. The authors also conclude that the viscous interference factor γ is effectively independent of speed, should be kept constant across the speed range, and depends primarily on the L/B ratio. The authors further conclude that:
• The vessels tested have an appreciable viscous form effect, which is higher for catamarans, where viscous interference takes place between the hulls.
• Viscous resistance interference was found to be independent of speed and hull separation, and rather is dependent on demihull length-to-beam ratio.
• Generally, higher hull separation ratios result in smaller wave interference, with beneficial wave interference between Froude numbers of 0.35 and 0.42.
• Catamarans display higher trim angles than monohulls, and the trim angle is reduced with increasing hull separation ratios.
• A ship-to-model correlation exercise is required for the extrapolation techniques presented to be validated.
The catamaran or twin-hull concept has been employed in high-speed craft design for several decades, and both sailing and powered catamarans are in use. For commercial purposes semi-planing type catamarans are predominant. The component hulls (demihulls) are of the planing type, featuring V-type sections and a cut-off transom stern. The division of displacement and waterplane area between two relatively slender hulls results in a large deck area, good stability qualities and consequently a small rate and angle of roll. Active control of pitching motions by means of fins may alleviate pitching problems. The resistance of a catamaran is mainly affected by the wetted surface ratio, the slenderness ratio and the hull spacing (s/L). The wetted surface ratio is relatively high compared with planing monohulls of the same displacement.
Consequently, catamarans show poor performance at low speeds (Fn < 0.35) where skin friction is predominant. At higher speeds, in the hump region, the low trim angles associated with the slender demihulls of the catamaran lead to a favorable performance. At planing speeds (Froude numbers around 1.0) the equivalent monohull (of equal displacement) will show an advantage, as the hydrodynamic performance decreases with decreasing aspect ratio (the ratio of the wetted breadth of the demihull to its length). One comparison found that the catamaran had some 30 percent less resistance at high speed, this reduction increasing to about 45 percent at 7.0. This advantage is due to the fact that at such high speeds the conventional boat is operating at a very small trim angle and high resistance, while the catamaran operates at a higher trim angle nearer to that for minimum resistance. An indication of the relative performance of catamarans and planing vessels is given in Fig. 97. The hull spacing ratio is associated with interference effects between the component hulls. These effects consist of wave interference effects and body interference effects. Wave interference effects are due to the superposition of the two wave systems, each associated with a component hull in isolation. The body interference effects are caused by the change of flow around one demihull due to the presence of the other demihull. Several studies on interference effects on resistance have been undertaken, e.g. Fry et al (1972), Sherman, et al (1975), Yermotayev, et al (1977) and Ozawa, et al (1977). The main component of the changed velocity field associated with body interference effects results from the induced flow of one demihull at the location of the other one. This induced flow is due partly to thickness effects and partly to lift effects. Consequently, the resulting flow around a symmetrical demihull will be composed of a symmetrical and an asymmetrical part.
Vollheim (1968) and Myazawa (1979) have carried out velocity studies by means of pressure measurements. These results referred to a displacement type of catamaran with symmetrical demihulls. Myazawa found an increase of the mean velocity both between the demihulls and on the outer sides. He also concluded that the asymmetrical contribution to the local velocity field was small. His results apparently do not agree with those of Vollheim, however. The asymmetrical onflow of one demihull and the possibly asymmetrical shape of that demihull will lead to hydrodynamic lift forces. On account of the finite aspect ratio, trailing vortices are shed, leading to induced velocities around the other hull. This effect is believed to be of less importance. The wave interference may influence the resistance to a large extent. Everest (1968) showed from a wave pattern analysis that beneficial wave interference is achieved by the cancellation of part of the divergent wave systems of each demihull, whereas adverse wave interference arises on interaction of the transverse wave systems. Fig. 98 shows the influence of the wave interference effects on the resistance obtained by Tasaki (1962) for a mathematical hull form. Here the wave interference is expressed in terms of the ratio between the wave pattern resistance of the catamaran and the wave pattern resistance of one demihull. In general, experiments confirm this behavior but smaller beneficial and adverse effects occur (Everest, 1968). Theoretical and experimental evidence for symmetrical demihulls indicates that wave interference becomes significant at Fn-values above 0.2. Maximum beneficial effects occur around Fn = 0.32, whereas adverse effects are most pronounced around Fn = 0.4. For asymmetrical demihulls, Everest (1969) and Turner, et al (1968) have made measurements. The generation of vertical hydrodynamic lift, and the associated change of hull form because of trim and rise of the center of gravity, may have a significant effect on the interference effects.
Therefore, for the semi-planing speed range other tendencies may be expected, see Fig. 99. Fry, et al (1972) show model test results from which it may be concluded that the interference effects are small at speeds exceeding Fn = 0.8. Only for small hull spacings do the effects still seem to be significant. These conclusions are in contradiction with those of Sherman, et al (1975), for which there is no satisfactory explanation presently. For measurements of planing catamarans with asymmetrical demihulls the work of Ozawa, et al (1977) and Sherman, et al (1975) may be mentioned. Design charts for planing catamarans have been published by Clement (1961). These are based on model tests. Application of these design charts is restricted to:
• low-aspect-ratio hulls, i.e., 0.1 ≤ AR ≤ 0.3,
• small deadrise angles, i.e., between 0 and 10 deg,
• high planing speeds, where buoyant forces are small.
Furthermore, the effects of interference between the hulls and of spray on the tunnel roof were not included. Sherman, et al (1975) modified Savitsky's (1964) planing performance prediction method for catamarans. The program does not include interference effects on drag and trim. Resistance due to spray interfering with the tunnel roof is again not included.
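As a rough illustration of how the resistance components discussed above are combined in practice, the sketch below uses an Insel/Molland-style decomposition C_T = (1 + βk)·C_F + τ·C_W, with the ITTC-57 correlation line for the friction coefficient C_F. Note that the form-factor term `beta_k` and the wave interference factor `tau` below are made-up placeholder values for illustration only, not values from the cited experiments.

```python
import math

def cf_ittc57(reynolds: float) -> float:
    """ITTC-57 model-ship correlation line for the friction coefficient."""
    return 0.075 / (math.log10(reynolds) - 2.0) ** 2

def catamaran_ct(reynolds: float, cw_demihull: float,
                 beta_k: float = 0.3, tau: float = 1.1) -> float:
    """Total resistance coefficient, Insel/Molland-style decomposition:
    C_T = (1 + beta*k) * C_F + tau * C_W.
    beta_k and tau are illustrative placeholders, not measured values."""
    return (1.0 + beta_k) * cf_ittc57(reynolds) + tau * cw_demihull

# Example with assumed numbers: friction coefficient at Re = 1e9
print(round(cf_ittc57(1e9), 6))  # 0.001531
```

The friction coefficient falls with Reynolds number, so the viscous term dominates at model scale more than at ship scale, which is why the form factor and its assumed speed-independence matter for extrapolation.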
In addition to the default and copy constructors, the following static methods should be available for constructing sites:

ss.construct_storage_site_2 ( Point_handle hp)
Constructs a storage site from a point handle. The storage site represents the point associated with the point handle hp.

ss.construct_storage_site_2 ( Point_handle hp1, Point_handle hp2)
Constructs a storage site from two point handles. The storage site represents the segment the endpoints of which are the points associated with the point handles hp1 and hp2.

ss.construct_storage_site_2 ( Point_handle hp1, Point_handle hp2, Point_handle hq1, Point_handle hq2)
Constructs a storage site from four point handles. The storage site represents the point of intersection of the segments the endpoints of which are the points associated with the point handles hp1, hp2 and hq1, hq2, respectively.

ss.construct_storage_site_2 ( Point_handle hp1, Point_handle hp2, Point_handle hq1, Point_handle hq2, bool b)
Constructs a site from four point handles and a boolean. The storage site represents a segment. If b is true, the first endpoint of the segment is the point associated with the handle hp1 and the second endpoint is the point of intersection of the segments the endpoints of which are the points associated with the point handles hp1, hp2 and hq1, hq2, respectively. If b is false, the first endpoint of the represented segment is the one mentioned above, whereas the second endpoint is the point associated with the point handle hp2.

ss.construct_storage_site_2 ( Point_handle hp1, Point_handle hp2, Point_handle hq1, Point_handle hq2, Point_handle hr1, Point_handle hr2)
Constructs a storage site from six point handles. The storage site represents a segment the endpoints of which are the points of intersection of two pairs of segments, the endpoints of which are hp1, hp2/hq1, hq2 and hp1, hp2/hr1, hr2, respectively.

bool ss.is_defined ()
Returns true if the storage site represents a valid point or segment.
bool ss.is_point ()
Returns true if the storage site represents a point.

bool ss.is_segment ()
Returns true if the storage site represents a segment.

bool ss.is_input ()
Returns true if the storage site represents an input point or a segment defined by two input points. Returns false if it represents a point of intersection of two segments, or if it represents a segment, at least one endpoint of which is a point of intersection of two segments.

bool ss.is_input ( unsigned int i)
Returns true if the i-th endpoint of the corresponding site is an input point. Returns false if the i-th endpoint of the corresponding site is the intersection of two segments. Precondition: i must be at most 1, and ss.is_segment() must be true.

ss.supporting_site ()
Returns a storage site object representing the segment that supports the segment represented by the storage site. The returned storage site represents a site, both endpoints of which are input points. Precondition: ss.is_segment() must be true.

ss.source_site ()
Returns a storage site that represents the first endpoint of the represented segment. Precondition: ss.is_segment() must be true.

ss.target_site ()
Returns a storage site that represents the second endpoint of the represented segment. Precondition: ss.is_segment() must be true.

ss.supporting_site ( unsigned int i)
Returns a storage site object representing the i-th segment that supports the point of intersection represented by the storage site. The returned storage site represents a site, both endpoints of which are input points. Precondition: i must be at most 1, ss.is_point() must be true and ss.is_input() must be false.

ss.crossing_site ( unsigned int i)
Returns a storage site object representing the i-th segment that supports the i-th endpoint of the site and is not the supporting segment of the site. The returned storage site represents a site, both endpoints of which are input points.
Precondition: i must be at most 1, ss.is_segment() must be true and ss.is_input(i) must be false.

Site_2 ss.site ()
Returns the site represented by the storage site.

Point_handle ss.point ()
Returns a handle associated with the represented point. Precondition: is_point() and is_input() must both be true.

Point_handle ss.source_of_supporting_site ()
Returns a handle to the source point of the supporting site of this site. Precondition: is_segment() must be true.

Point_handle ss.target_of_supporting_site ()
Returns a handle to the target point of the supporting site of this site. Precondition: is_segment() must be true.

Point_handle ss.source_of_supporting_site ( unsigned int i)
Returns a handle to the source point of the i-th supporting site of this site. Precondition: is_point() must be true, is_input() must be false and i must either be 0 or 1.

Point_handle ss.target_of_supporting_site ( unsigned int i)
Returns a handle to the target point of the i-th supporting site of this site. Precondition: is_point() must be true, is_input() must be false and i must either be 0 or 1.

Point_handle ss.source_of_crossing_site ( unsigned int i)
Returns a handle to the source point of the i-th crossing site of this site. Precondition: is_segment() must be true, is_input(i) must be false and i must either be 0 or 1.

Point_handle ss.target_of_crossing_site ( unsigned int i)
Returns a handle to the target point of the i-th crossing site of this site. Precondition: is_segment() must be true, is_input(i) must be false and i must either be 0 or 1.
How to Solve And Plot A Cubic Equation In Matlab?
To solve and plot a cubic equation in MATLAB, you can follow these steps:
1. Define the equation: Start by defining the cubic equation using symbolic variables. For example, let's say your cubic equation is "ax^3 + bx^2 + cx + d = 0". Use the syms function to define symbols for the variable 'x' and the coefficients 'a', 'b', 'c', and 'd'.
syms x a b c d
equation = a*x^3 + b*x^2 + c*x + d;
2. Find the roots: Use the solve function to find the roots of the equation. Specify 'x' as the variable to solve for.
roots = solve(equation, x);
The output will be a list of solutions for 'x'. Note that with symbolic coefficients the solutions are symbolic expressions; substitute numeric values for 'a', 'b', 'c', and 'd' (for example with subs) before plotting.
3. Plot the equation: To visualize the cubic equation, you can create a plot using the ezplot function (fplot is the recommended replacement in recent releases). Pass the equation along with the desired x-range.
figure;
ezplot(equation, [-10, 10]);
This will generate a plot of the cubic equation over the specified x-range (-10 to 10).
4. Plot the roots: To indicate the location of the roots on the plot, use the hold on command to retain the current plot and then mark the roots with red circles.
hold on;
r = double(roots);           % convert the symbolic roots to numeric values
r = r(abs(imag(r)) < 1e-9);  % keep only the real roots for plotting
plot(r, zeros(size(r)), 'ro');
The double function converts the symbolic roots obtained in step 2 into numeric values, and plot places red circles (specified by 'ro') at the real root positions.
5. Customize the plot: You can further customize the plot by adding a title, axis labels, a grid, and legends.
title('Cubic Equation');
xlabel('x');
ylabel('y');
grid on;
legend('Cubic Equation', 'Roots');
Feel free to modify the labels and design options according to your preference.
6. Show the plot: Finish by displaying the plot.
hold off;
The 'hold off' command will turn off the 'hold on' mode, so that subsequent plotting commands start a fresh plot. Finally, run your code and the plot of the cubic equation along with the roots will be displayed.
How to determine the turning points of a cubic equation graphically in MATLAB?
To determine the turning points of a cubic equation graphically in MATLAB, you can follow these steps:
1. Define the cubic equation as a function in MATLAB. For example, suppose you have the equation f(x) = ax^3 + bx^2 + cx + d, where a, b, c, and d are constants. Define the function in MATLAB as:
function y = cubicEquation(x, a, b, c, d)
    y = a*x.^3 + b*x.^2 + c*x + d;   % element-wise powers so x may be a vector
end
2. Choose a range of x-values over which you want to graph the cubic equation. For example, you can choose -10 to 10. Generate a vector of x-values in MATLAB using the linspace function:
x = linspace(-10, 10, 100);
3. Evaluate the cubic equation for each x-value to obtain the corresponding y-values. Use the cubicEquation function defined in step 1:
y = cubicEquation(x, a, b, c, d);
Note: Replace a, b, c, and d in the above code with the actual values you have for your cubic equation.
4. Plot the cubic equation graph using the plot function:
plot(x, y);
5. Locate the turning points on the graph. These points correspond to the local maximum or minimum points on the graph. You can visually identify these points by finding where the slope of the graph changes from positive to negative or vice versa. If you want to find the x and y coordinates of the turning points programmatically, you can use additional MATLAB functions such as diff and sign to calculate the slope and determine where it changes sign. Here's an example:
slope = diff(y)./diff(x); % approximate the slope between consecutive points
turningPointIndices = find(diff(sign(slope)) ~= 0); % find where the slope changes sign
turningPoints = [x(turningPointIndices); y(turningPointIndices)]; % approximate x and y coordinates of the turning points
This code calculates the slope for each point and then finds the indices where the sign of the slope changes. Finally, it retrieves the x and y coordinates of the turning points.
6.
Plot the turning points on the graph:
hold on;
plot(turningPoints(1, :), turningPoints(2, :), 'ro');
hold off;
This code places red circles at the turning points on the graph. By following these steps, you can determine and plot the turning points of a cubic equation graphically in MATLAB.
How do you input a cubic equation in MATLAB?
To input a cubic equation in MATLAB, you define its coefficients as a vector and evaluate the polynomial with the 'polyval' function. Here's an example:
1. Start by defining the coefficients of the cubic equation in descending order. Let's say your cubic equation is: f(x) = ax^3 + bx^2 + cx + d, where a, b, c, and d are the coefficients.
2. Define the coefficients in MATLAB using a row vector. For example, if your coefficients are a = 1, b = 2, c = 3, and d = 4, you can define them as:
coefficients = [1, 2, 3, 4];
3. Once the coefficients are defined, you can evaluate the cubic equation using the 'polyval' function. This function takes two arguments: the coefficients and the value of x for which you want to evaluate the equation. For example, to evaluate the cubic equation at x = 2, you can use:
x = 2;
result = polyval(coefficients, x);
The 'result' variable will store the value of f(x) at x = 2. That's it! You have now inputted a cubic equation in MATLAB and evaluated it for a specific value of x.
What are the techniques to enhance the visualization of the plot in MATLAB?
Changing line properties: Use the plot function's additional parameters to customize the appearance of the plot, such as line style, color, and marker type. For example, you can use the Color parameter to specify a desired color. 4. Adding annotations: Use the text function to add text annotations to specific points on the plot. You can also use the annotation function to add shapes or arrows to highlight certain features. 5. Using color maps: MATLAB provides various color maps that can be applied to visualize data. The colormap function allows you to select different color maps, and the colorbar function can be used to display a color scale for reference. 6. Plotting multiple data sets: You can plot multiple data sets together using the hold function. This allows you to overlay different plots on the same figure, making it easier to compare and analyze data. 7. Creating subplots: The subplot function allows you to divide a figure into multiple smaller axes, each with its own plot. This is helpful when you want to display related plots side by side, providing a clear visual comparison. 8. Using 3D visualization techniques: If you are working with 3D data, MATLAB provides functions like mesh, surf, and contour3 for creating surface plots. These functions can display data in a three-dimensional space and provide a more comprehensive view of the data. 9. Adjusting plot appearance: MATLAB provides several properties and options to customize the appearance of plots, such as font size, line thickness, grid lines, plot backgrounds, and more. Experiment with these options to enhance the aesthetic appeal and clarity of your plots. 10. Interactive exploration: MATLAB's plotting functions have interactive capabilities that allow you to zoom, pan, rotate, and interact with plots dynamically. These features can help you dive deeper into the details of your data and explore it from different angles. What are the components of a cubic equation? 
A cubic equation is a polynomial equation of degree 3. It can be written in the form:
ax^3 + bx^2 + cx + d = 0
where a, b, c, and d are constants, and x is the variable. The components of a cubic equation are:
1. The constant term (d): It is the coefficient of the x^0 term, also known as the constant term. It represents the y-intercept of the cubic function.
2. The linear term (cx): It is the coefficient of the x term. It gives the slope of the cubic function at x = 0.
3. The quadratic term (bx^2): It is the coefficient of the x^2 term. It represents the effect of x's square on the cubic function.
4. The cubic term (ax^3): It is the coefficient of the x^3 term. It represents the highest power of x in the cubic function and determines the shape and behavior of the curve.
These components together define the cubic equation and allow us to solve for the values of x that satisfy the equation. The solutions of a cubic equation can be real or complex; counted without multiplicity there can be 1, 2, or 3 distinct roots, and a cubic with real coefficients always has at least one real root.
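The turning-point procedure above can also be checked analytically: the turning points of f(x) = ax^3 + bx^2 + cx + d are the real roots of the derivative f'(x) = 3ax^2 + 2bx + c, a quadratic. A small sketch of this check, written in Python rather than MATLAB purely for illustration:

```python
import math

def cubic(a, b, c, d):
    """Return f(x) = a*x^3 + b*x^2 + c*x + d as a callable (Horner form)."""
    return lambda x: ((a * x + b) * x + c) * x + d

def turning_points(a, b, c):
    """Real roots of f'(x) = 3a*x^2 + 2b*x + c, i.e. the turning points."""
    A, B, C = 3 * a, 2 * b, c
    disc = B * B - 4 * A * C
    if disc < 0:
        return []                      # no real turning points
    s = math.sqrt(disc)
    return sorted([(-B - s) / (2 * A), (-B + s) / (2 * A)])

# f(x) = x^3 - 3x has turning points at x = -1 and x = 1
f = cubic(1, 0, -3, 0)
print(turning_points(1, 0, -3))  # [-1.0, 1.0]
```

The numerical sign-change approach in the MATLAB example approximates these same locations; the analytic roots are exact and make a convenient cross-check.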
How much Thermal Energy is Created?
Marissa drags a 22 kg duffel bag 16 m across the gym floor. If the coefficient of kinetic friction between the floor and bag is 0.15, how much thermal energy does Marissa create?
Known variables:
mass (m) = 22 kg
distance dragged (s) = 16 m
coefficient of kinetic friction (μ) = 0.15
gravitational acceleration (g) = 9.8 m/s^2
Thermal energy equation: E = μ*m*g*s
E = (0.15) x (22) x (9.8) x (16) = 517.44 J
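A quick check of the arithmetic, using Python purely as a calculator with the values given above:

```python
mu = 0.15   # coefficient of kinetic friction
m = 22      # mass of the duffel bag, kg
g = 9.8     # gravitational acceleration, m/s^2
s = 16      # distance dragged, m

# Thermal energy = friction force (mu*m*g) times the sliding distance
thermal_energy = mu * m * g * s
print(round(thermal_energy, 2))  # 517.44 (joules)
```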
You don't need PL/pgsql!

You don't need PL/pgsql to create functions that do some pretty sophisticated things. Here's a function written entirely in SQL that returns the inverse cumulative distribution function known in Microsoft Excel™ circles as NORMSINV.

CREATE OR REPLACE FUNCTION normsinv(prob float8) RETURNS float8 AS $$
WITH constants(a, b, c, d, p_low, p_high) AS (
    SELECT
        ARRAY[-3.969683028665376e+01, 2.209460984245205e+02,
              -2.759285104469687e+02, 1.383577518672690e+02,
              -3.066479806614716e+01, 2.506628277459239e+00]::float8[],
        ARRAY[-5.447609879822406e+01, 1.615858368580409e+02,
              -1.556989798598866e+02, 6.680131188771972e+01,
              -1.328068155288572e+01]::float8[],
        ARRAY[-7.784894002430293e-03, -3.223964580411365e-01,
              -2.400758277161838e+00, -2.549732539343734e+00,
              4.374664141464968e+00, 2.938163982698783e+00]::float8[],
        ARRAY[7.784695709041462e-03, 3.224671290700398e-01,
              2.445134137142996e+00, 3.754408661907416e+00]::float8[],
        0.02425::float8,
        (1 - 0.02425)::float8
),
intermediate(p, q, r) AS (
    SELECT
        prob AS p,
        CASE
            WHEN prob > 0 AND prob < p_low THEN sqrt(-2*ln(prob))
            WHEN prob >= p_low AND prob <= p_high THEN prob - 0.5
            WHEN prob > p_high AND prob < 1 THEN sqrt(-2*ln(1 - prob))
            ELSE NULL
        END AS q,
        CASE
            WHEN prob >= p_low AND prob <= p_high THEN (prob - 0.5)*(prob - 0.5)
            ELSE NULL
        END AS r
    FROM constants
)
SELECT CASE
    WHEN p < 0 OR p > 1 THEN 'NaN'::float8
    WHEN p = 0 THEN '-Infinity'::float8
    WHEN p = 1 THEN 'Infinity'::float8
    WHEN p < p_low THEN
        (((((c[1]*q+c[2])*q+c[3])*q+c[4])*q+c[5])*q+c[6]) /
        ((((d[1]*q+d[2])*q+d[3])*q+d[4])*q+1)
    WHEN p >= p_low AND p <= p_high THEN
        (((((a[1]*r+a[2])*r+a[3])*r+a[4])*r+a[5])*r+a[6])*q /
        (((((b[1]*r+b[2])*r+b[3])*r+b[4])*r+b[5])*r+1)
    WHEN p > p_high THEN
        -(((((c[1]*q+c[2])*q+c[3])*q+c[4])*q+c[5])*q+c[6]) /
        ((((d[1]*q+d[2])*q+d[3])*q+d[4])*q+1)
    ELSE (p*0)/0  /* This should never happen; it would cause an error */
END
FROM constants, intermediate;
$$ LANGUAGE sql;

COMMENT ON FUNCTION normsinv(prob float8) IS
$$This implementation is taken from https://stackedboxes.org/2017/05/01/acklams-normal-quantile-function/$$;

[Edit: There were some typos and wrong functions in the previous version]
Hi, it seems that your function returns a very different result from Excel's NORMSINV function for the same input. Can you please check it?
1. Fixed, and thanks for helping track it down!
2. Good example of the power of SQL. I think it can be marked as immutable.
3. Hi Deivid, "WHEN prob < p_low AND prob > p_low THEN sqrt(-2*ln(prob))" has to be changed to "WHEN prob > 0 AND prob < p_low THEN sqrt(-2*ln(prob))". Also, in PG, when I try to execute the function it returns the error: Field "prob" does not exist.
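The rational approximation the SQL function uses (Acklam's inverse normal quantile) is easy to sanity-check outside the database. Below is a plain-Python sketch of the same algorithm (the variable names are mine, not from the post), compared against the textbook value Φ⁻¹(0.975) ≈ 1.95996.

```python
import math

# Acklam's coefficients for the inverse standard-normal CDF.
A = [-3.969683028665376e+01,  2.209460984245205e+02, -2.759285104469687e+02,
      1.383577518672690e+02, -3.066479806614716e+01,  2.506628277459239e+00]
B = [-5.447609879822406e+01,  1.615858368580409e+02, -1.556989798598866e+02,
      6.680131188771972e+01, -1.328068155288572e+01]
C = [-7.784894002430293e-03, -3.223964580411365e-01, -2.400758277161838e+00,
     -2.549732539343734e+00,  4.374664141464968e+00,  2.938163982698783e+00]
D = [ 7.784695709041462e-03,  3.224671290700398e-01,  2.445134137142996e+00,
      3.754408661907416e+00]
P_LOW, P_HIGH = 0.02425, 1 - 0.02425

def normsinv(p):
    """Inverse standard-normal CDF via Acklam's approximation."""
    if not 0 < p < 1:
        raise ValueError("p must be strictly between 0 and 1")
    if p < P_LOW:                                 # lower tail
        q = math.sqrt(-2 * math.log(p))
        return (((((C[0]*q + C[1])*q + C[2])*q + C[3])*q + C[4])*q + C[5]) / \
               ((((D[0]*q + D[1])*q + D[2])*q + D[3])*q + 1)
    if p <= P_HIGH:                               # central region
        q = p - 0.5
        r = q * q
        return (((((A[0]*r + A[1])*r + A[2])*r + A[3])*r + A[4])*r + A[5]) * q / \
               (((((B[0]*r + B[1])*r + B[2])*r + B[3])*r + B[4])*r + 1)
    q = math.sqrt(-2 * math.log(1 - p))           # upper tail
    return -(((((C[0]*q + C[1])*q + C[2])*q + C[3])*q + C[4])*q + C[5]) / \
            ((((D[0]*q + D[1])*q + D[2])*q + D[3])*q + 1)

print(round(normsinv(0.975), 5))  # ≈ 1.95996, matching Excel's NORMSINV
```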
Measuring Resiliency in Older Adults | Hopkins Bloomberg Public Health Magazine Measuring Resiliency in Older Adults Karen Bandeen-Roche uses statistical methods to identify biological signatures of resilience and frailty. Interview by Melissa Hartman • Photo by Chris Hartlove Unobservables are Karen Bandeen-Roche’s specialty. While some problems associated with aging are readily apparent—walking slows, hearing and vision weaken—some can’t be observed directly, like processes underlying cognitive decline. That’s where Bandeen-Roche, PhD, MS, Hurley Dorrier Professor and Chair in Biostatistics, gets creative. Using surrogate measures—things we can observe, like muscle strength—she develops models that explain the underlying processes of problems such as dementia and frailty. Such models could help identify targets for effective treatment—or even better, prevention. The need is great for these insights: By 2050, the global population of people age 60 and older will number more than 2 billion, compared to 962 million in 2017. How do you define resilience and frailty? My colleagues and I consider the two as manifestations of the same underlying physiology, where the network of systems that can absorb shocks either is well-tuned or loses the ability to compensate. We think of physical resilience in older adults as the ability to recover from stressors such as a surgery, or perhaps a fall or an infection. Maybe [they] take a temporary hit but ultimately rebound to the same level of health they began with. That’s resilience. In frailty, on the other hand, a person becomes vulnerable to adverse outcomes following a stressor. These aren’t easy to measure. Correct. But there are, particularly for frailty, a few measures regarded as the leading approaches in the field. One of them was developed here at Johns Hopkins. 
My colleagues Linda Fried [now dean of the Columbia Mailman School of Public Health], Jeremy Walston and others developed what’s known as the physical phenotype of frailty, [which includes] several criteria including weakness, low activity, weight loss and slowness. My contribution has been to validate that approaches such as this are succeeding at assessing what they intend. Do any of these tools measure where people are before a stressor to see where they should be after? To create a baseline? This is exactly the frontier of where we’re going with a resiliency study Dr. Walston and I, with others, now have in the field. One pilot study we conducted almost 10 years ago in very old women looked at exactly the idea you just said: You bring people in, you subject them to a mini-stressor—an oral glucose tolerance test is a good example. You measure the insulin response over, say, two hours, so that you get a whole curve of response. You would hypothesize that resilient people should have an appropriate response and then quickly return to their baseline, whereas the frail people might have an exaggerated response and not bounce back nearly as quickly. And in fact, that’s what we see. This is the frontier, we believe, of how to measure both resiliency and frailty before older adults experience a serious stressor, such as surgery. Hopefully we could then do something to either guide their clinical care or to fortify them. What might that look like—physical therapy before a surgery? That’s exactly what a short-term intervention might look like—physical therapy, or perhaps a nutritional boost. It might be pharmaceutical interventions. It’s a hot area of development, whether there are pharmaceutical interventions that, for example, could boost mitochondrial functioning so that energy production is restored more toward what a healthy individual would look like. 
What are the next big questions that need to be answered about frailty? I think there are three big questions. The one I’m most interested in is, what causes frailty, and therefore what can we do to delay the onset of frailty or boost resilience? There’s a second question in the clinical realm. Geriatricians believe it’s beneficial to screen for frailty, or to case-find for frailty. But nobody quite knows what to do once you’ve done that. Developing a randomized, controlled evidence base to answer how persons identified as “frail” should be clinically managed is a super important next step. And then the third question is, how do you identify individuals who are about to become frail? There are measures for “pre-frailty,” but a next generation of those measures would help us identify people who are on the cusp of frailty before they’re too far along to be brought back. You’ve said that you’ve “evolved into a hybrid scientist, roughly equal parts statistician and gerontologist.” How did that happen? By accident. My statistical interest is in learning how to measure things we are able to define conceptually but for which no accurate and precise measures yet exist; the best we can do is to use surrogate measures that together may allow us to infer the target that conceptually you’re after. A framework for evaluating how well the surrogate measures succeed is called latent variable models. About a year after I got here, I met Linda Fried, who I mentioned before. Her aging study seemed interesting. I had no prior interest in aging, but the prospects for public health, reducing suffering and promoting vibrant years of life were so great for older adults.
And at the same time, how to measure geriatric concepts like frailty was a good fit with my statistical interests. Have you found anything along the way that particularly surprised you? In work we have done to evaluate the epidemiology of frailty in the U.S., we observed massive disparities in frailty prevalence by race and ethnicity—a 60% to 80% increase in black and Hispanic older adults as compared to their white counterparts. I guess I should have expected it, but just the magnitude of the disparities was surprising to me. We also found disparities by income and geographic regions. There’s lots of targeted prevention work to be done. I don’t think there’s a one-size-fits-all sort of solution.
Yet More Infinite Universes Where the dangers presented by those with a little learning are allayed Yet More Infinite Universes So, it seems that Auntie Beeb has decided to respond to my previous article on infinite universes (where I amply argued that a new universe cannot be brought into being just because you decide not to scratch your nose), by putting up an episode of Horizon. The episode gives three scientists the chance to amend their theories, taking out the bit about new universes being created every time you decide to chew four times instead of five, and put forward the proposal that all the infinite universes have been there all along. This is likely to get quite long, because a lot of real learning has to be clarified, in order to properly highlight the little learning elements, so I shall get to addressing the three hastily modified theories later. First, let's have a little chat about the word "infinite". My Patience (like everything else) is NOT Infinite! I swear, by the end of watching the episode, I was close to the point where, had I heard the terms "infinite" or "Mathematical certainty" again, I would have thrown a brick through the TV. Three more times would have made it an actual Mathematical certainty. I don't know how much Mathematics you know, but if you read the garbage I write, then you can't be at zero (who says mathematicians can't tell a joke?) Well, if you know a little about Maths, then you will know that as soon as you add an infinity to a calculation, It Aint Bloody Mathematics, Any More! Infinities do not, and can not, exist in reality. Infinity is an abstract concept, which can only exist in the abstract. (Oh, God. I can just hear them grinding out the argument "Ah, but some of the infinite universes may be abstract universes!") (D'you know what's wrong with society? You're not allowed to hit people.) 
But we have to wind back even further, here, to be absolutely clear what numbers are, before we can go into what infinities are(n't). By the numbers... Technically, numbers are "numeric determiners", which, grammatically, allow you to point at things and say how many of them there are. Unlike adjectives and adverbs, numbers do not modify anything except themselves, i.e. the only thing that adding a number changes is the number. For example, if you have a car, you can describe it by saying it's a blue car or a really fast car. That's what adjectives and adverbs do: they change the item that you attach them to. "A car" and "a blue car" are different because (the definition of) one of them has been modified by having a particular colour added, while the other could be any colour. Add a determiner, though, and you don't modify (the definition of) anything: "that car" is still "a car", "my car" is still "a car", and indeed "a car" is still "a car" ("a" is a determiner). Nothing about (the definition of) the car changes; determiners just determine things like which, or whose, or how many. The "how many", of course, is given by numeric determiners, numbers. Whether you say "a car" or "six cars", (the definitions of) the cars aren't modified, only the number is. Numerical determiners, Huah! What are they GOOD for? Quite a lot, actually. So numbers are, and have always been, abstract, because numbers themselves do not change anything about (the definition of) reality -- they are used to simply point at things and communicate how many of them there are. Then people smarter than us invented Mathematics (any moron can use something that's already been invented, remember, so those guys thousands of years ago were much smarter than us). One of the primary functions of Mathematics is to allow you to work out "how many" or "how much" without actually getting out of your seat. 
That is, instead of going out and buying two trains, taking them to opposing stations, and making them run toward each other at different speeds, you can just sit in your chair and work out where they will pass each other. Or crash into each other, if you're a boy. This "working it out from your chair" method uses abstraction, where you don't have real trains, you have abstract thingies, which don't really exist, to represent the trains; and you don't have real speeds or distances, either; you have abstract thingies to represent them. The abstract thingies for things like speed and distance are called numbers. And they're very versatile; abstractions are incredibly powerful tools, e.g. If you have two trains that are 100 [unit of measurement] apart, travelling toward each other at 50 [unit of velocity], it's really easy to calculate when they will pass/crash -- and it doesn't matter what the units of measurement or velocity are! Kilometers? Miles? Hours? Minutes? It doesn't make a blind bit of difference! The way to calculate it, the formula, is the same for all units of measurement and all types of moving object! Like, Wowzer! Just by using abstracts, we've managed to create a formula that is pretty much universal! We don't need reality, any more! We can do it all with numbers! Er, yeah. Dare Numeri (It's not pronounced "dair", it's "dah-rei" -- it's Italian, look it up.) If you've looked it up, you'll have found that it means "give numbers", which translates, in this context, to "talk bollocks". (Aren't you annoyed that you took the time to look it up, now?) There's probably no need for me to explain that the reason why Italians have this phrase is that so many people use numbers to talk bollocks. Because numbers are abstract, you can do things with them that you can't really do with real things. E.g.
if you wanted to say that your two trains are a hundred parsecs apart, travelling at fifty parsecs per hour, you're quite welcome to, because the numbers don't care; they're not real. However, if you tried to perform the calculation the old-fashioned way, from back before when people smarter than us invented Mathematics, you would have to: 1. Get two trains, which you probably can't do, because the missus won't let you spend that much money on something that isn't shoes. 2. Put them a hundred parsecs apart, which you certainly cannot do, because, well, because you can't. The entirety of the resources of the whole, wide world could not do it. 3. Set them off toward each other at 50 parsecs an hour, which you absolutely cannot do, because, er, speed of light, y'know? Like I say, though: the numbers don't care. They don't care whether or not the thing that you're calculating can actually happen in reality. They're just numbers. They don't think. You are the one who is supposed to be able to think, and do things like reality checks. So, Back to Infinity It doesn't happen. Infinities just do not happen In Mathematics, it is possible to speculate on infinity, because Mathematics just uses numbers, which are abstract, and it's impossible to say anything other than "there is no highest number, therefore numbers go on to infinity", but -- shall I repeat the word -- they are Abstract! Abstract means Not Real! Let's have a look at what people who are not air-headed do • Hooke was not air-headed, when he came up with "Hooke's Law", which states that an expansive spring will expand relative to the force applied until its elastic limit is reached. 
He could quite easily have followed the airhead path, and said that an expansive spring will expand relative to the force applied until infinity, because numbers would support that, but, as I said, he was not an airhead, so instead he came up with what is perhaps the most important rule ever invented – that stuff happens relative to stuff that happens until something else happens. Remember that law. It is one of the most important laws that you will ever learn – and the next time someone says "Oh yeah, Hooke. He was the guy who did stuff with springs, wasn't he?", you give 'em a good slap (societal tenets be damned) and point out that that law is pure genius, and only a great genius could have invented it. • Kelvin was many things, but he was not an airhead when he came up with Absolute Zero. Instead of saying that matter can get colder and colder until infinity, because numbers would support that, he said that matter can get colder and colder until it all breaks down, and the numbers don't work, any more, and he flogged a few slaves until they worked out where that point was. So he (or his slaves, at least) also understood the universal rule that stuff happens relative to stuff that happens until something else happens, instead. One of the first things you learn to do, in any field of science, is to look for When Something Starts Happening, and When it Stops Happening. Because everything has a start point and an end point! There Is No F&ˆ%$£* Infinity! ... Unless you're an airhead who believes that abstract things are real, which, by definition, they are not. Now let's have a look at the world of the airhead • "If an infinite number of monkeys bashed away at an infinite number of typewriters, one of them would eventually write all of Shakespeare's plays" Lovely. Beautiful idea for a bit of a laugh. But... □ Long before the number of monkeys could even get anywhere near infinity, you would either run out of food, or somewhere to put all their sh1t – or both.
□ Long before the number of typewriters could even get anywhere near infinity, you would run out of materials to make them with. So it's just pipe-dreamy, abstract stuff; a bit of fun that's completely disconnected from reality, and should in no way be used for any calculation of anything. • If enough matter gets together in one lump, it will collapse into a super-duper-ultra black hole, with infinite gravity! □ Er, yeah. May I just point out that if you achieve infinite gravity, the entire universe will collapse into it in zero time. The saving grace here is that it could not happen, because to achieve infinite gravity would probably, because of time dilation, require infinite time – so Something Else would Happen! In fact, I'm yet to be convinced that the universe has been around for long enough for even a rinky-dink "normal" black hole to have formed. The most painful thing about this is that no-one is looking for the point where something else will happen. Everyone is just assuming that the numbers can be extrapolated to infinity, and that something else won't happen. Hooke and Kelvin would be so ashamed, to see the air-headedness of scientists who claim to be their betters. OK, I'm done, for now. Coming Soon... In our next, exciting episode, I will address the grand theories on infinite universes put forward by these scientists (yes, they really are scientists – they are actually paid to [S:talk bollocks:S] do the job) (I'm going to propose that they be paid with abstract money). I must say that I feel I will take particular delight in blowing one of the "proofs" out of the water – in fact, it's not just a proof, it's more like the basis for the entire theory. ... And it's something that I figured out the bleeding-obvious reality of when I was a teenager (but I was a teenage, human male, at the time, so I didn't talk to adult humans unless I absolutely had to). 
It makes me wonder, though, how many other bleeding-obvious things I or other people have taken for granted that might be puzzling everyone else (that's the trouble with the bleeding obvious: you either see it, or you don't, and you just can't see it all the time). So what I'll do now is pull a Fermat, and pop my clogs before I get a chance to write the next page. Go back to the Grumpy Old Scribe index page Go back to the main site
Deeparnab Chakrabarty: Parallel Submodular Function Minimization

Theory Seminar

Deeparnab Chakrabarty, Dartmouth College
3725 Beyster Building
PASSCODE: 430018

Submodular functions are fundamental objects in discrete optimization arising in various areas from computer science to economics. They are set functions which prescribe a value to every subset of an n-element universe, and are defined “locally” as follows: for every subset A of the universe and elements i & j not in A, f(A) + f(A + i + j) is at most f(A + i) + f(A + j). This property alone leads to tractability of many optimization problems: a remarkable one among them is that submodular function minimization (SFM), that is, finding the global minimum of such set functions, can be done using polynomially many queries.

In this talk, we will mainly focus on “lower bounds” or the “limitations of locality”. Most of the talk will be about recent works on the *parallel complexity* of SFM: how many *rounds* of queries are needed for efficient SFM? Can this be done with only constant or logarithmic rounds? The answer is “no”, and we will describe some constructions and quantify this. We will also discuss some questions left open by these works.

This talk is based on joint works with Yu Chen, Andrei Graur, Haotian Jiang, Sanjeev Khanna, and Aaron Sidford.

Greg Bodwin
Euiwoong Lee
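The local inequality quoted in the abstract can be tested mechanically. The snippet below (not from the talk; the ground set and coverage sets are invented for illustration) checks that a set-coverage function satisfies f(A) + f(A ∪ {i, j}) ≤ f(A ∪ {i}) + f(A ∪ {j}) for every subset A and every pair i, j outside A — coverage functions are a classic example of submodularity.

```python
from itertools import combinations

# Each "item" in the ground set covers a few elements of a small universe.
sets = {0: {1, 2}, 1: {2, 3}, 2: {3, 4}, 3: {1, 4}}

def f(chosen):
    """Coverage function: number of elements covered by the chosen items."""
    covered = set()
    for s in chosen:
        covered |= sets[s]
    return len(covered)

ground = set(sets)
ok = True
for i, j in combinations(sorted(ground), 2):
    rest = ground - {i, j}
    # Check the local submodularity condition for every subset A of the rest.
    for k in range(len(rest) + 1):
        for A in combinations(sorted(rest), k):
            A = set(A)
            if f(A) + f(A | {i, j}) > f(A | {i}) + f(A | {j}):
                ok = False
print(ok)  # coverage functions are submodular, so this prints True
```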
Machine Learning in Python

Path overview

In this path, you’ll gain a strong understanding of supervised and unsupervised machine learning algorithms. You’ll also learn some of the most important and widely used algorithms and techniques to build, customize, train, test and optimize your predictive models, such as linear regression modeling, gradient descent, logistic regression modeling, and decision tree and random forest modeling. Finally, you’ll learn optimization techniques that will help you to improve efficiency and accuracy. Best of all, you’ll learn by doing — you’ll practice and get feedback directly in the browser. You’ll apply your skills to several guided projects with realistic business scenarios to build your portfolio and prepare for your next interview.

Key skills

• Understanding the core mathematical concepts behind machine learning
• Identifying applications of supervised and unsupervised machine learning models
• Using algorithms such as linear regression, logistic regression and gradient descent
• Applying optimization methods to improve your models

Path outline

Part 1: Machine Learning In Python [7 courses]

• Establish a machine learning workflow
• Implement the K-Nearest Neighbors algorithm for a classification task from scratch using Pandas
• Implement the K-Nearest Neighbors algorithm using scikit-learn
• Evaluate a machine learning model
• Find optimal hyperparameter values using grid search
• Identify applications of unsupervised machine learning
• Implement a basic k-means algorithm
• Evaluate and optimize the performance of a k-means model
• Visualize the model
• Build a k-means model using scikit-learn
• Describe a linear regression model
• Construct a linear regression model and evaluate it based on the data
• Interpret the results of a linear regression model
• Use a linear regression model for inference and prediction
• Code a basic Gradient Descent algorithm
• Recognize the limitations of basic Gradient Descent
• Contrast the uses of basic Batch and Stochastic Gradient Descent
• Visualize Stochastic Gradient Descent using Matplotlib
• Apply Stochastic Gradient Descent in Python using scikit-learn
• Describe a logistic regression model
• Construct a logistic regression model and evaluate it based on the data
• Interpret the results of a logistic regression model
• Use a logistic regression model for inference and prediction
• Create, customize, and visualize decision trees
• Use and interpret decision trees on new data
• Calculate optimal decision paths
• Optimize trees by altering their parameters
• Apply the random forest prediction technique
• Distinguish between different optimization techniques
• Identify the best optimization approach for your project
• Apply optimization methods to improve your model
• Employ machine learning tools on various optimization methods

The Dataquest guarantee

Dataquest has helped thousands of people start new careers in data. If you put in the work and follow our path, you’ll master data skills and grow your career. We believe so strongly in our paths that we offer a full satisfaction guarantee. If you complete a career path on Dataquest and aren’t satisfied with your outcome, we’ll give you a refund.

Master skills faster with Dataquest

Go from zero to job-ready
Learn exactly what you need to achieve your goal. Don’t waste time on unrelated lessons.

Build your project portfolio
Build confidence with our in-depth projects, and show off your data skills.

Challenge yourself with exercises
Work with real data from day one with interactive lessons and hands-on exercises.

Showcase your path certification
Share the evidence of your hard work with your network and potential employers.

Projects in this path

Predicting Heart Disease
For this project, we’ll take on the role of a data scientist at a healthcare solutions company to build a model that predicts a patient’s risk of developing heart disease based on their medical data.
Credit Card Customer Segmentation
For this project, we’ll play the role of a data scientist at a credit card company to segment customers into groups using K-means clustering in Python, allowing the company to tailor strategies for each segment.

Predicting Insurance Costs
For this project, you’ll step into the role of a data analyst tasked with developing a model to predict patient medical insurance costs based on demographic and health data.

Stochastic Gradient Descent on Linear Regression
For this project, we’ll step into the role of data scientists aiming to predict the optimal time to go to the gym to avoid crowds. We’ll build a stochastic gradient descent linear regression model using Python.

Classifying Heart Disease
For this project, you’ll assume the role of a medical researcher aiming to develop a logistic regression model to predict heart disease in patients based on their clinical characteristics.
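The stochastic gradient descent linear regression technique that several of the courses and projects above cover can be sketched in a few lines of plain Python. The synthetic data and learning rate here are invented for illustration, not Dataquest's own code: we fit y = w·x + b one sample at a time on data that follows y = 3x + 2 exactly, so the fit should recover w ≈ 3 and b ≈ 2.

```python
import random

random.seed(0)
# Noiseless synthetic data on the line y = 3x + 2, with x in [0, 1.9].
data = [(x / 10, 3.0 * (x / 10) + 2.0) for x in range(20)]

w, b, lr = 0.0, 0.0, 0.05
for epoch in range(2000):
    random.shuffle(data)          # "stochastic": visit samples in random order
    for x, y in data:
        err = (w * x + b) - y     # residual on this one sample
        w -= lr * err * x         # gradient of the squared error w.r.t. w
        b -= lr * err             # gradient of the squared error w.r.t. b

print(round(w, 3), round(b, 3))   # → 3.0 2.0
```

Because the data are consistent (zero noise), (w, b) = (3, 2) is an exact fixed point of the per-sample updates, so the constant learning rate converges to it rather than oscillating around it.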
Lesson 22: Combining Like Terms (Part 3)

22.1: Are They Equal? (5 minutes)

The purpose of this activity is to remind students of things they learned in the previous lesson using numerical examples. Look for students who evaluate each expression and students who use reasoning about operations and properties. Remind students that working with subtraction can be tricky, and to think of some strategies they have learned in this unit. Encourage students to reason about the expressions without evaluating them. Give students 2 minutes of quiet think time followed by whole-class discussion.

Student Facing

Select all expressions that are equal to \(8-12-(6+4)\).

1. \(8-6-12+4\)
2. \(8-12-6-4\)
3. \(8-12+(6+4)\)
4. \(8-12-6+4\)
5. \(8-4-12-6\)

Anticipated Misconceptions

Students who selected \(8-6-12+4\) or \(8-12+(6+4)\) might not understand that the subtraction sign outside the parentheses applies to the 4 and that 4 is always to be subtracted in any equivalent expression. Students who selected \(8-12+(6+4)\) might think the subtraction sign in front of 12 also applies to \((6+4)\) and that the two subtractions become addition.

Activity Synthesis

Poll the class on whether each expression is equal to the given expression or not, then select a student to explain why it is or is not. If the first student reasoned by evaluating each expression, ask if anyone reasoned without evaluating.

22.2: X’s and Y’s (15 minutes)

In this activity students take turns with a partner and work to make sense of writing expressions in equivalent ways. This activity is a step up from the previous lesson because there are more negatives for students to deal with, and each expression contains more than one variable.

Arrange students in groups of 2. Tell students that for each expression in column A, one partner finds an equivalent expression in column B and explains why they think it is equivalent.
The partner's job is to listen and make sure they agree. If they don't agree, the partners discuss until they come to an agreement. For the next expression in column A, the students swap roles. If necessary, demonstrate this protocol before students start working.

Representation: Internalize Comprehension. Differentiate the degree of difficulty or complexity by beginning with an example with more accessible values. For example, start with an expression with three terms such as “\(6x-(2x+8)\)" and show different forms of equivalent expressions. Highlight connections between expressions by using the same color on equivalent parts of the expression. Supports accessibility for: Conceptual processing

Listening, Speaking: MLR8 Discussion Supports. Display sentence frames for students to use to describe the reasons for their matches. For example, “I matched expression ___ with expression ___ because . . . .” or “I used the ___ property to help me match expression ___ with expression ___.” Provide a sentence frame for the partner to respond with, such as: “I agree/disagree with this match because . . . .” These sentence frames provide students with language structures that help them to produce explanations, and also to critique their partner’s reasoning. Design Principle(s): Maximize meta-awareness; Support sense-making

Student Facing

Match each expression in column A with an equivalent expression from column B. Be prepared to explain your reasoning.

Column A:
1. \((9x+5y) + (3x+7y)\)
2. \((9x+5y) - (3x+7y)\)
3. \((9x+5y) - (3x-7y)\)
4. \(9x-7y + 3x+ 5y\)
5. \(9x-7y + 3x- 5y\)
6. \(9x-7y - 3x-5y\)

Column B:
1. \(12(x+y)\)
2. \(12(x-y)\)
3. \(6(x-2y)\)
4. \(9x+5y+3x-7y\)
5. \(9x+5y-3x+7y\)
6. \(9x-3x+5y-7y\)

Anticipated Misconceptions

For the second and third rows, some students may not understand that the subtraction sign in front of the parentheses applies to both terms inside that set of parentheses.
Some students may get the second row correct, but not realize how the third row relates to the fact that the product of two negative numbers is a positive number. For the last three rows, some students may not recognize the importance of the subtraction sign in front of \(7y\). Prompt them to rewrite the expressions replacing subtraction with adding the inverse. Students might write an expression with fewer terms but not recognize an equivalent form because the distributive property has been used to write a sum as a product. For example, \(9x-7y + 3x- 5y\) can be written as \(9x+3x-7y-5y\) or \(12x-12y\), which is equivalent to the expression \(12(x-y)\) in column B. Encourage students to think about writing the column B expressions in a different form and to recall that the distributive property can be applied to either factor or expand an expression.

Activity Synthesis

Much discussion takes place between partners. Invite students to share how they used properties to generate equivalent expressions and find matches.
• “Which term(s) does the subtraction sign apply to in each expression? How do you know?”
• “Were there any expressions from column A that you wrote with fewer terms but were unable to find a match for in column B? If yes, why do you think this happened?”
• “What were some ways you handled subtraction with parentheses? Without parentheses?”
• “Describe any difficulties you experienced and how you resolved them.”

22.3: Seeing Structure and Factoring (10 minutes)

This activity is an opportunity to notice and make use of structure (MP7) in order to apply the distributive property in more sophisticated ways. Display the expression \(18-45+27\) and ask students to calculate as quickly as they can. Invite students to explain their strategies. If no student brings it up, ask if the three numbers have anything in common (they are all multiples of 9).
One way to quickly compute would be to notice that \(18-45+27\) can be written as \(2\boldcdot 9 -5\boldcdot 9 +3\boldcdot 9\) or \((2-5+3)\boldcdot 9\), which can be quickly calculated as 0. Tell students that noticing common factors in expressions can help us write them with fewer terms or more simply. Keep students in the same groups. Give them 5 minutes of quiet work time and time to share their expressions with their partner, followed by a whole-class discussion.

Action and Expression: Internalize Executive Functions. To support development of organizational skills, check in with students within the first 2–3 minutes of work time. Look for students who identify common factors or rearrange terms to write the expressions with fewer terms. Supports accessibility for: Memory; Organization

Student Facing

Write each expression with fewer terms. Show or explain your reasoning.
1. \(3 \boldcdot 15 + 4 \boldcdot 15 - 5 \boldcdot 15 \)
2. \(3x + 4x - 5x\)
3. \(3(x-2) + 4(x-2) - 5(x-2) \)
4. \(3\left(\frac52x+6\frac12\right) + 4\left(\frac52x+6\frac12\right) - 5\left(\frac52x+6\frac12\right)\)

Activity Synthesis

For each expression, invite a student to share their process for writing it with fewer terms. Highlight the use of the distributive property.

Speaking: MLR8 Discussion Supports. Provide sentence frames to help students explain their strategies. For example, “I noticed that ______, so I ______.” or “First, I ________ because ________.” When students share their answers with a partner, prompt them to rehearse what they will say when they share with the full group. Rehearsing provides students with additional opportunities to clarify their thinking. Design Principle(s): Optimize output (for explanation)

Lesson Synthesis

Ask students to reflect on their work in this unit. They can share their response to one or more of these prompts either in writing or verbally with a partner.
• “Describe something that you found confusing at first that you now understand well.”
• “Think of a story problem that you would not have been able to solve before this unit that you can solve now.”
• “What is a tool or strategy that you learned in this lesson that was particularly useful?”
• “Describe a common mistake that people make when using the ideas we studied in this unit and how they can avoid that mistake.”
• “Which is your favorite, and why? The distributive property, rewriting subtraction as adding the opposite, or the commutative property.”

22.4: Cool-down - R's and T's (5 minutes)

Student Facing

Combining like terms is a useful strategy that we will see again and again in our future work with mathematical expressions. It is helpful to review the things we have learned about this important strategy:

• Combining like terms is an application of the distributive property. For example:
\(\begin{gather} 2x+9x\\ (2+9) \boldcdot x \\ 11x\\ \end{gather}\)

• It often also involves the commutative and associative properties to change the order or grouping of addition. For example:
\(\begin{gather} 2a+3b+4a+5b \\ 2a+4a+3b+5b \\ (2a+4a)+(3b+5b) \\ 6a+8b\\ \end{gather}\)

• We can't change order or grouping when subtracting; so in order to apply the commutative or associative properties to expressions with subtraction, we need to rewrite subtraction as addition. For example:
\(\begin{gather} 2a-3b-4a-5b \\ 2a+\text-3b+\text-4a+\text-5b\\ 2a + \text-4a + \text-3b + \text-5b\\ \text-2a+\text-8b\\ \text-2a-8b \\ \end{gather}\)

• Since combining like terms uses properties of operations, it results in expressions that are equivalent.

• The like terms that are combined do not have to be a single number or variable; they may be longer expressions as well. Terms can be combined in any sum where there is a common factor in all the terms. For example, each term in the expression \(5(x+3)-0.5(x+3)+2(x+3)\) has a factor of \((x+3)\).
We can rewrite the expression with fewer terms by using the distributive property: \(\begin{gather} 5(x+3)-0.5(x+3)+2(x+3)\\ (5-0.5+2)(x+3)\\ 6.5(x+3)\\ \end{gather}\)
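The arithmetic behind the worked example above, that \(5(x+3)-0.5(x+3)+2(x+3)\) collapses to \(6.5(x+3)\), can be spot-checked numerically. This short Python script is an illustration only, not part of the curriculum:

```python
# Evaluate both forms of the expression at several inputs and confirm
# that combining like terms did not change the value.
def original(x):
    return 5 * (x + 3) - 0.5 * (x + 3) + 2 * (x + 3)

def combined(x):
    return 6.5 * (x + 3)

for x in [-3, -1, 0, 2.5, 10]:
    assert abs(original(x) - combined(x)) < 1e-9

print("5(x+3) - 0.5(x+3) + 2(x+3) = 6.5(x+3) at every sampled point")
```

Sampling a few inputs is not a proof, but it is a quick way to catch a slip when combining like terms by hand.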
Help Please And Thank You

On the x-intercept of the graph, the y value is always 0, so we can substitute y with 0. Therefore 4x-2(0)=-8, so 4x=-8 and x=-8/4=-2. Hence the coordinates of the x-intercept are (-2, 0). On the y-intercept, the x coordinate is 0. Hence 4(0)-2y=-8, so -2y=-8 and y=-8/-2=4. The y-intercept coordinate is (0, 4).

The new triangle A'B'C' shown in the attached graph has the coordinates B' (2, 8), A' (1, 6), and C' (4, 6).

What are congruent triangles? Two triangles are said to be congruent if their corresponding sides and angles are equal. The given triangle ABC shown in the graph has: the coordinates of B are (-7, 6), the coordinates of A are (-8, 4), and the coordinates of C are (-5, 4). If B' is the translated position of B, the new triangle A'B'C' shown in the attached graph has: the coordinates of B' are (2, 8), the coordinates of A' are (1, 6), and the coordinates of C' are (4, 6).

Length of AB = A'B' = √5
Length of BC = B'C' = 2√2
Length of CA = C'A' = 3
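The intercept argument in the answer above can be written as a couple of lines of Python. This is an illustration of the substitution steps, not something from the original post:

```python
# Find the intercepts of 4x - 2y = -8.
# x-intercept: set y = 0  ->  4x = -8  ->  x = -2
x_intercept = -8 / 4
# y-intercept: set x = 0  ->  -2y = -8  ->  y = 4
y_intercept = -8 / -2

print("x-intercept:", (x_intercept, 0))
print("y-intercept:", (0, y_intercept))

# Both points satisfy the original equation.
assert 4 * x_intercept - 2 * 0 == -8
assert 4 * 0 - 2 * y_intercept == -8
```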
Reducing the Number of Tests Using Bayesian Inference to Identify Infected Patients in Group Testing

JPS Hot Topics 1, 021 © The Physical Society of Japan

This article is on Bayesian Inference of Infected Patients in Group Testing with Prevalence Estimation (JPSJ Editors' Choice), J. Phys. Soc. Jpn. 89, 084001 (2020).

Group testing is a method of identifying infected patients by performing tests on a pool of specimens. Bayesian inference and a corresponding belief propagation (BP) algorithm are introduced to identify the infected patients in group testing.
What is Quantum Computing? | HackerNoon

Quantum computing is the area of study focused on developing computer technology based on the principles of quantum theory. A quantum computer, following the laws of quantum physics, would gain enormous processing power through the ability to be in multiple states, and to perform tasks using all possible permutations simultaneously.

A Comparison of Classical and Quantum Computing

Classical computing relies, at its ultimate level, on principles expressed by Boolean algebra. Data must be processed in an exclusive binary state at any point in time: bits. While the time that each transistor or capacitor needs to be in either 0 or 1 before switching states is now measurable in billionths of a second, there is still a limit as to how quickly these devices can be made to switch state. As we progress to smaller and faster circuits, we begin to reach the physical limits of materials and the threshold for classical laws of physics to apply. Beyond this, the quantum world takes over. In a quantum computer, a number of elemental particles such as electrons or photons can be used, with either their charge or polarization acting as a representation of 0 and/or 1. Each of these particles is known as a quantum bit, or qubit; the nature and behavior of these particles form the basis of quantum computing.

Quantum Superposition and Entanglement

The two most relevant aspects of quantum physics are the principles of superposition and entanglement. Think of a qubit as an electron in a magnetic field. The electron’s spin may be either in alignment with the field, which is known as a spin-up state, or opposite to the field, which is known as a spin-down state. According to quantum law, the particle enters a superposition of states, in which it behaves as if it were in both states simultaneously. Each qubit utilized could take a superposition of both 0 and 1.
Particles that have interacted at some point retain a type of connection and can be entangled with each other in pairs, in a process known as correlation. Knowing the spin state of one entangled particle — up or down — allows one to know that the spin of its mate is in the opposite direction. Quantum entanglement allows qubits that are separated by incredible distances to interact with each other instantaneously (not limited to the speed of light). No matter how great the distance between the correlated particles, they will remain entangled as long as they are isolated.

Taken together, quantum superposition and entanglement create an enormously enhanced computing power. Where a 2-bit register in an ordinary computer can store only one of four binary configurations (00, 01, 10, or 11) at any given time, a 2-qubit register in a quantum computer can store all four numbers simultaneously, because each qubit represents two values. If more qubits are added, the increased capacity is expanded exponentially.

Difficulties with Quantum Computers

• Interference — During the computation phase of a quantum calculation, the slightest disturbance in a quantum system (say a stray photon or wave of EM radiation) causes the quantum computation to collapse, a process known as de-coherence. A quantum computer must be totally isolated from all external interference during the computation phase.
• Error correction — Given the nature of quantum computing, error correction is ultra critical: even a single error in a calculation can cause the validity of the entire computation to collapse.
• Output observance — Closely related to the above two, retrieving output data after a quantum calculation is complete risks corrupting the data.

The Future of Quantum Computing

The biggest and most important application is the ability to factorize a very large number into two prime numbers. That's really important because that's what almost all encryption of internet applications uses, and it could be de-encrypted.
A quantum computer should be able to do that relatively quickly. Another application is calculating the positions of individual atoms in very large molecules like polymers and in viruses, and the way that the particles interact with each other: if you have a quantum computer you could use it to develop drugs and understand how molecules work a bit better.

Even though there are many problems to overcome, the breakthroughs in the last 15 years, and especially in the last 3, have made some form of practical quantum computing possible. However, the potential that this technology offers is attracting tremendous interest from both the government and the private sector. It is this potential that is rapidly breaking down the barriers to this technology, but whether all barriers can be broken, and when, is very much an open question.

Ahmed Banafa, author of the books Secure and Smart Internet of Things (IoT) Using Blockchain and AI and Blockchain Technology and Applications. Read more articles at Technology Trends by Prof. Ahmed Banafa.
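The 2-qubit register described earlier can be illustrated with a toy state vector in plain Python. The equal-superposition choice and the variable names are assumptions made for this sketch, not anything from the article:

```python
import itertools

# A single qubit in an equal superposition of |0> and |1> has
# amplitudes (1/sqrt(2), 1/sqrt(2)).
amp = 2 ** -0.5
qubit = [amp, amp]

# The 2-qubit state is the tensor (Kronecker) product of two
# single-qubit states: one amplitude per basis state 00, 01, 10, 11.
register = [a * b for a, b in itertools.product(qubit, qubit)]

for bits, a in zip(["00", "01", "10", "11"], register):
    print(f"|{bits}>: amplitude {a:.4f}, probability {a ** 2:.2f}")

# The four probabilities sum to 1: the register carries amplitude
# for all four configurations at once.
print("total probability:", sum(a ** 2 for a in register))
```

Adding a qubit doubles the length of this amplitude list, which is the exponential growth in capacity the article describes.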
Depth First Search (DFS) | CS61B Guide

Depth First Traversal

Before we move on to searching, let's talk about traversing. Traversal is the act of visiting nodes in a specific order. This can be done either in trees or in graphs. For trees in particular, there are three main ways to traverse.

The first way is inorder traversal, which visits all left children, then the node itself, then all right children. The end result should be that the nodes were visited in sorted order. The second way is preorder traversal, which visits the node itself first, then all left children, then all right children. This method is useful for applications such as printing a directory tree. The third way is postorder traversal, which visits all left children, then all right children, then finally the node itself. This method is useful for when operations need to be done on all children before the result can be read in the node, for instance getting the sizes of all items in a folder.

Here are some pseudocodey algorithms for tree traversals.

// INORDER will print A B C D E F G
void inOrder(Node x) {
    if (x == null) return;
    inOrder(x.left);
    print(x.key);
    inOrder(x.right);
}

// PREORDER will print D B A C F E G
void preOrder(Node x) {
    if (x == null) return;
    print(x.key);
    preOrder(x.left);
    preOrder(x.right);
}

// POSTORDER will print A C B E G F D
void postOrder(Node x) {
    if (x == null) return;
    postOrder(x.left);
    postOrder(x.right);
    print(x.key);
}

Depth First Search in Graphs

Graphs are a little more complicated to traverse due to the fact that they could have cycles in them, unlike trees. This means that we need to keep track of all the nodes already visited and add to that list whenever we encounter a new node. Depth First Search is great for determining if everything in a graph is connected.
Here's an outline of how this might go:

• Keep an array of 'marks' (true if a node has been visited) and, optionally, an edgeTo array that will automatically keep track of how to get to each connected node from a source node.
• When each vertex is visited, for each adjacent unmarked vertex:
  • Set edgeTo of that vertex equal to this current vertex.
  • Call the recursive method on that vertex.

Like trees, DFS can be done inorder, preorder, or postorder. It's nearly identical behavior to trees, with the addition of the marks array.
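A runnable Python version of the outline above. The names `marked` and `edge_to` mirror the guide's arrays, while the adjacency-dict graph representation is an assumption made for the sketch:

```python
def dfs(graph, source):
    """Recursive depth-first search that records which vertices were
    visited and the tree edge used to reach each one."""
    marked = {v: False for v in graph}
    edge_to = {}

    def visit(v):
        marked[v] = True
        for w in graph[v]:
            if not marked[w]:
                edge_to[w] = v   # we reached w from v
                visit(w)

    visit(source)
    return marked, edge_to

# A small graph with a cycle 0-1-2-0 and a tail 2-3.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
marked, edge_to = dfs(graph, 0)
print(marked)    # every vertex reachable from the source is marked
print(edge_to)   # one tree edge per vertex (except the source)
```

A vertex left unmarked after the call is unreachable from the source, which is exactly how DFS answers the connectivity question mentioned above.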
Quantum mechanics over sets (QM/ℤ₂ or QM/Sets) is a pedagogical or `toy’ model of finite-dimensional quantum mechanics (QM/ℂ) that reproduces, in the simplified setting of vector spaces over ℤ₂, the essentials of projective measurements, the double-slit experiment, the indeterminacy principle, entanglement, Bell’s Theorem, the statistics of indistinguishable particles, and so forth.
How do you find the missing percent abundance?

To calculate the percent abundance of each isotope in a sample of an element, chemists usually divide the number of atoms of a particular isotope by the total number of atoms of all isotopes of that element and then multiply the result by 100.

How do you find the natural abundance in amu?

Example 1: Determine the relative abundance of the isotopes if the mass of one isotope of nitrogen, nitrogen-14, is 14.003 amu and that of another isotope, nitrogen-15, is 15.000 amu. Solution: The average atomic mass of nitrogen is 14.007 amu. Therefore, the percent abundance for x = 99.6% and (1 − x) = 0.004 = 0.4%.

How do you find the missing amu of an isotope?

How do you find the unknown atomic mass?

To calculate the atomic mass of a single atom of an element, add up the mass of protons and neutrons. Example: Find the atomic mass of an isotope of carbon that has 7 neutrons. You can see from the periodic table that carbon has an atomic number of 6, which is its number of protons.

How do you find abundance in chemistry?

1. (M1)(x) + (M2)(1-x) = M(E)
2. Example problem: If the mass of one isotope of nitrogen, nitrogen-14, is 14.003 amu and another isotope, nitrogen-15, is 15.000 amu, find the relative abundance of the isotopes.
3. Use algebra to solve for x.
4. x = 0.996.

How do you find percent abundance without atomic mass?

Is u the same as amu?

The new unit was given the symbol u to replace amu, plus some scientists called the new unit a Dalton. However, u and Da were not universally adopted. Many scientists kept using the amu, just recognizing it was now based on carbon rather than oxygen.

How do you calculate the average atomic mass of an isotope?

How do you calculate atomic mass with percent abundance and isotopes?

Step 1: List the known and unknown quantities and plan the problem. Change each percent abundance into decimal form by dividing by 100. Multiply this value by the atomic mass of that isotope.
Add together for each isotope to get the average atomic mass.

What do you mean by 1 amu?

One amu is the average of the proton rest mass and the neutron rest mass. This can be expressed as the following: 1 amu = 1.67377 x 10^-27 kilograms = 1.67377 x 10^-24 grams. Carbon-12 is considered a reference for all atomic mass calculations.

What is 1 amu or 1 u? Define one atomic mass unit (a.m.u.)

It is denoted by amu (atomic mass unit) or simply u. One atomic mass unit (1 u) is a mass unit equal to exactly one-twelfth (1/12th) the mass of one atom of the carbon-12 isotope.

What is the mass of 1 amu?

It is represented as a.m.u or u (unified). 1 a.m.u. is the average of the proton rest mass and the neutron rest mass: 1 a.m.u. = 1.67377 x 10^-27 kilogram or 1.67377 x 10^-24 gram.

What is amu in chemistry?

The atomic mass of an element is the average mass of the atoms of an element measured in atomic mass units (amu, also known as daltons, Da). The atomic mass is a weighted average of all of the isotopes of that element, in which the mass of each isotope is multiplied by the abundance of that particular isotope.

What is average atomic mass in chemistry?

The average atomic mass (sometimes called atomic weight) of an element is the weighted average mass of the atoms in a naturally occurring sample of the element. Average masses are generally expressed in unified atomic mass units (u), where 1 u is equal to exactly one-twelfth the mass of a neutral atom of carbon-12.

What is the percent abundance of an isotope?

The relative abundance of an isotope is the percentage of atoms with a specific atomic mass found in a naturally occurring sample of an element.

How do you find the abundance of 3 isotopes?

What is the value of 1 amu in grams?

1 amu is equal to 1.66 × 10^-24 g.

What has a mass of 0 amu?

Electron: a subatomic particle found outside the nucleus of an atom. It has a charge of −1 and a mass of 0 amu (really about 1/2000 amu).

Is amu the same as grams?
Gram is used in our day to day life to express the mass of goods that we use, whereas amu is used for minute-scale measurements. The main difference between amu and grams is that amu is used to express mass at the atomic level, whereas gram is used as a metric unit of mass.

Is amu equal to g/mol?

Therefore we just proved that an atomic mass unit is the same thing as grams per mole.

How many amu are there in 1 g?

One gram is equivalent to 6.022 × 10^23 amu.

Is Dalton the same as amu?

Actually, ‘dalton’ and ‘unified atomic mass unit‘ are alternative names for the same unit, equal to 1/12 times the mass of a free carbon-12 atom, at rest and in its ground state.

Is amu equal to mass number?

The mass number is the number of nucleons (protons + neutrons) in a nucleus, so it is a dimensionless magnitude. The atomic mass unit (amu), however, is a mass unit; it measures mass. 1 amu is defined as one-twelfth of the mass of a C-12 atom.

Why is amu equal to molar mass?

Atomic and molar masses are interconvertible because they both are the weights of the same element. Moreover, a.m.u. is numerically equal to the grams in one mole of a substance.

What would 1 amu be in grams if measured on a chemical balance?

How is amu defined?

A unit of mass used to express atomic mass and molecular mass. Given that 1 amu = 1.6606 x 10^-24 g, what is the mass of one mole of hydrogen atoms?
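The nitrogen example above, where (M1)(x) + (M2)(1 − x) = M(E) gives x = 0.996, can be reproduced with a few lines of Python. This is just the article's algebra, rearranged to solve for x:

```python
# Solve m1*x + m2*(1 - x) = avg for x, using the nitrogen example:
# nitrogen-14 at 14.003 amu, nitrogen-15 at 15.000 amu,
# average atomic mass 14.007 amu.
m1, m2, avg = 14.003, 15.000, 14.007

# Rearranging: x = (avg - m2) / (m1 - m2)
x = (avg - m2) / (m1 - m2)

print(f"nitrogen-14 abundance: {x:.3f} ({x:.1%})")
print(f"nitrogen-15 abundance: {1 - x:.3f} ({1 - x:.1%})")
```

The computed value rounds to 0.996, matching the worked answer, and substituting x back into the weighted average recovers 14.007 amu.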
Simultaneously 7748 - math word problem (7748)

The monitor screen is blank. Four new wheels appear on the screen every seven seconds when the beep sounds. On the contrary, three wheels disappear from the screen every eleven seconds. If both actions should take place simultaneously, the number of circles on the screen will not change.
A) Specify the number of circles on the screen 1 minute after the beep.
B) Specify the number of circles on the screen 5 minutes after the beep.
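The site's answer is not reproduced above, but the problem can be simulated second by second under one natural reading: +4 circles at every multiple of 7 seconds, −3 at every multiple of 11, and no change when the two coincide (multiples of 77). The function below encodes that reading; treat its output as an illustration of the method, not as the official answer key:

```python
def circles_after(seconds):
    """Count circles on the screen after the given number of seconds,
    assuming +4 every 7 s, -3 every 11 s, and no change when both
    events coincide (multiples of 77)."""
    count = 0
    for t in range(1, seconds + 1):
        if t % 7 == 0 and t % 11 == 0:
            continue            # simultaneous events: no change
        if t % 7 == 0:
            count += 4
        elif t % 11 == 0:
            count -= 3
    return count

print("after 1 minute:", circles_after(60))
print("after 5 minutes:", circles_after(300))
```

Within 60 seconds there are no common multiples of 7 and 11, so the special rule only matters for the 5-minute count (at 77, 154, and 231 seconds).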
Describing patterns and sequences There are 124 NRICH Mathematical resources connected to Describing patterns and sequences Imagine we have four bags containing numbers from a sequence. What numbers can we make now? Can you find the connections between linear and quadratic patterns? Play around with the Fibonacci sequence and discover some surprising results! Can you figure out how sequences of beach huts are generated? Surprising numerical patterns can be explained using algebra and diagrams... Just because a problem is impossible doesn't mean it's difficult... How many possible symmetrical necklaces can you find? How do you know you've found them all? What patterns can you make with a set of dominoes? How do you know if your set of dominoes is complete? Charlie has made a Magic V. Can you use his example to make some more? And how about Magic Ls, Ns and Ws?
Question ID - 53162 | SaraNextGen Top Answer

A force of 200 N acts tangentially on the rim of a wheel 25 cm in radius. Find the torque.
a) 50 Nm
b) 150 Nm
c) 75 Nm
d) 39 Nm

Clearly, the question refers to the torque about an axis through the centre of the wheel. Since the radius to the point of application of the force is the lever (moment) arm, we have τ = rF = (0.25 m)(200 N) = 50 N·m, so the answer is (a).
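Numerically, the torque for a tangential force is just the lever arm times the force; a one-line computation in Python, added purely as an illustration:

```python
# Torque about the wheel's axis for a tangential force: tau = r * F.
force_n = 200      # N, applied tangentially at the rim
radius_m = 0.25    # 25 cm expressed in metres

torque_nm = radius_m * force_n
print(f"torque = {torque_nm} N*m")  # matches option (a), 50 Nm
```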
1D maps

The period-doubling accumulation point in a unimodal map of 4th power. For bimodal 1D maps, the limit of period-doubling on a special curve in the parameter plane defined by the condition "extremum is mapped to extremum". The T-point appears as the terminal point of the Feigenbaum curve. This type of critical behavior is known after Chang, Wortis, Wright, and Fraser and Kapral.

More general systems

The T-point may appear generically only in codimension 3. In some cases the pseudo-tricritical behavior may occur, as an intermediate asymptotics.

RG equation
The fixed point
The orbital scaling factor
Critical multiplier
Relevant eigenvalues
CoDim=3 (restr. 2)

Codimension-2 example
Parameter space arrangement and scaling with factors
Scaling coordinates

Codimension-3 example
Tricritical point at

Codimension-3 example in 2D invertible map
For D=0.3 the tricritical point is located at
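The page's 4th-power unimodal map is not reproduced here, but the period-doubling cascade it refers to is easy to observe in the standard logistic map x → r·x(1−x), used below purely as a stand-in; the function name and parameter choices are illustrative assumptions:

```python
def attractor_period(r, x0=0.5, burn=2000, sample=64, tol=1e-6):
    """Iterate the logistic map past a transient, then return the
    smallest period of the sampled orbit (up to `sample`)."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(sample):
        x = r * x * (1 - x)
        orbit.append(x)
    for p in range(1, sample):
        if all(abs(orbit[i] - orbit[i % p]) < tol for i in range(sample)):
            return p
    return None

# Successive parameter values pick up successive period doublings.
for r in (2.8, 3.2, 3.5):
    print(f"r = {r}: period {attractor_period(r)}")
```

Raising r further continues the cascade (period 8, 16, ...) until the accumulation point near r ≈ 3.5699, the logistic-map analogue of the accumulation point discussed on the page.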
Focus | MathBait

Wanna build a Death Star? While a quadratic is a polynomial formation, it is also something much more powerful. Parabolas belong to a special class of relations called conic sections. This gives the parabola the ability to intensify and focus incoming rays with enough strength to blow up a planet. 🙀 Dive into Marco's mind as you explore The Eye and the terrifying monster under the floorboards...

While quadratics are polynomials, they also belong to a powerful group of relations called conic sections. I suppose the story goes that one day in ancient Greece somebody wondered about the shapes one could make when slicing a cone. They had no idea their findings would help Kepler describe planetary motion, or allow a doctor to dissolve a kidney stone without pain, or help create the Hubble Telescope, focusing rays of light to gather images of the universe. But alas, the exploration of conic sections has led to some amazing inventions.

There are four shapes that result from slicing a cone. The mighty circle can be described as all the points equidistant from a given point. It's why to draw a circle you get a string of a certain length, tie one end to a pencil, and trace. A hyperbola is the set of all points whose distances from two fixed points differ by a constant. These can be created with rational functions. Cool fact: lamps often cast hyperbolic shadows. A bit more complicated, an ellipse is all the points whose combined distance from two fixed points is some constant value. To draw an ellipse, make two fixed points (with a nail or pushpin) and tie a string between the two. Push the pencil against the string to create your ellipse!

Ah, the parabola. This is the conic section we are interested in. While the others are all about the points, a parabola is defined as all the points the same distance from a point (the focus) and a line (the directrix).

Try it Out! Slice the Cone: Move the green point to see all the different ways a plane can slice a cone.
©MathBait created with GeoGebra

To build a parabola, you simply need to pick your focal point and where to place your lever. Go ahead and give it a try!

All the good details are already in the book, so we won't go too much in depth here. The key idea is that a parabolic formation acts as an amplifier of light, lasers, sound waves, and more! This makes it super powerful and we'll want to know how to harness this strength. The farther away your focus and lever are from each other, the wider the resulting formation, allowing more energy to be gathered.

Just like a circle is all the points the same distance (the radius) from the center point, a parabola is all the points on the plane that are equidistant from the focal point and the directrix. You could think of this like a soccer game. The focus is the goal and the directrix is the line of scrimmage. It makes sense to disperse your players at points which are the same distance from both. If you do this, you will have created a parabolic formation.

Finding the order

If we know where we need our energy to focus, how can we find the order that will send our soldiers to the correct locations? Let x be any soldier and y be their location. We need to first find the distance from this post, (x, y), to the focal point of our formation. Since the focus aligns with the leader of the formation at –h, and is some distance, p, from the leader's location, the focus appears at (-h, k-p). We can use this to find the distance from any soldier, (x, y), to our focal point. Now we need to find the distance to our lever. Since the lever is a horizontal line, this distance is just the difference of the two heights. We know these two distances must be equal. So we make them equal! We can manipulate this equation into the order we are looking for. It looks horrible, but it turns out we have a lot of counterbalances on each side that will vanquish each other! Hey!
That looks just like vertex form of a parabolic formation: y=a(x+h)²+k. It turns out a=-1/4p, where p is the distance from the vertex to the lever. This means, if I know the order, I can find the focal point!

Parabolic Art

All art is informed by mathematics. It may be ratios, scaling, perspective, but it all boils down to numbers and relationships. Try your hand at some parabolic art by moving the focus and directrix about the plane. Select new colors to build your masterpiece!

"Without the terrifying layers of teeth, it reminded him of an eye. It was like he was controlling the pupil: as he pulled the directrix down it dilated, allowing more light to enter. When he brought the lever up impossibly close to the vertex, the pupil became tiny, an intimidating bouncer only letting select rays enter the party."

"Marco imagined he was in Fredrick's factory. The floorboards split apart to reveal a colossal hole that housed a gigantic slimy monster. He couldn't even see the body of the beast as all that was visible was its jaws; rings and rings of teeth circling the dangling uvula. Looking to his left, he saw a lever attached to the wall. He pulled it down. The creature's jaw widened; mouth open to devour its meal. Pulling the lever up narrowed the opening. It became thinner and skinnier. Not in any danger, he taunted the beast. Up, down, up, down, open, closed, open, closed. The monster's jaw was snapping like the face exercises he'd seen singers and actors do when warming up. It exploded out of its hole revealing its long worm-like body. Gooey pink layers of chubby folds dripped as it focused its attention on Marco."
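Returning to the math: the book's relation a = −1/(4p), with the focus a distance p below the vertex and the lever (directrix) a distance p above it, can be sanity-checked numerically. The specific numbers below are arbitrary choices made for the sketch:

```python
# Check the focus-directrix property for y = a(x + h)^2 + k with
# a = -1/(4p): focus at (-h, k - p), directrix at height k + p.
h, k, p = 2.0, 1.0, 0.5
a = -1 / (4 * p)

focus = (-h, k - p)

for x in (-5.0, -2.0, 0.0, 3.5):
    y = a * (x + h) ** 2 + k
    to_focus = ((x - focus[0]) ** 2 + (y - focus[1]) ** 2) ** 0.5
    to_directrix = abs((k + p) - y)
    assert abs(to_focus - to_directrix) < 1e-9

print("every sampled point is equidistant from the focus and the lever")
```

Every point generated by the order lands the same distance from the focal point and the lever, which is exactly the defining property of the formation.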
JEE Advanced 2014 Maths Question Paper-2 with Answer Keys - Free PDF Download

A free PDF of the JEE Advanced 2014 Maths Question Paper-2 with Answer Keys is available on Vedantu. Practicing the JEE Advanced Maths Question Paper-2 of 2014 with solutions will help students score more marks in the Joint Entrance Examination. JEE Advanced 2014 held the entrance exam on 19th January 2014, for the 2nd time since its inception on May 23, 2009. It was an extremely popular entrance exam because it was the first time that the number of seats was reduced from 15,300 to 6,300 (from 1,90,000 seats). More candidates appeared for the JEE 2014 exam than for any other.

FAQs on JEE Advanced 2014 Maths Question Paper-2 with Answer Keys

1. If two candidates have the same JEE Advanced aggregate marks, will the two candidates be given the same rank?

If two or more candidates have the same aggregate score, a tie-break policy is applied to determine rank: candidates with higher positive scores receive the higher ranking. If a candidate fails to appear for one paper, the mark sheet will show A (Absent), but the final grade will be FAIL; there is no Dropout status to display. Thus, it is important that candidates appear for both papers of JEE Advanced 2014 to get a high rank in the exam.

2. What is the syllabus for Maths for JEE Advanced 2014 Paper-2?
Here’s the list of topics - Algebra, Vector, Binomial Theorem, Complex Numbers, Coordinate Geometry, Circle, Parabola, Straight Line, Coordinate Geometry (3-D), Differential Calculus, Application of Derivatives, Continuity & Derivability, Functions, Limit, Continuity & Differentiability, Integral Calculus, Definite Integration, Differential Equation, Permutation & Combination, Probability, Quadratic Equation, Sequence & Series, Trigonometry, Inverse Trigonometric Function, Solution of Triangle, Trigonometric Equation. It is important that students understand the syllabus for the Maths JEE Advanced paper to score high marks in the exam.

3. Where can I find the JEE Advanced Maths Question Paper-2 of 2014 with Answer Keys?

The JEE Advanced Maths paper is quite different from earlier papers because of the introduction of different levels of difficulty. Candidates preparing for this exam have to be more focused to get an edge over the others, and this needs to be done very carefully. The pattern of the paper remains the same as that of JEE 2014, and the weightage of each question is the same as that in JEE 2014. The question paper along with the answer keys is available here on Vedantu.

4. How can I get information about the JEE Advanced Maths Question Paper-2 of 2014?

Vedantu is a reliable source of information about JEE Advanced coaching. For any queries related to JEE Advanced or other coaching for engineering entrance exams, visit http://www.vedantu.com or mail us at sales@vedantu.com. Students can easily find all information related to the JEE Advanced Maths Question Paper-2 of 2014 to get an idea about the type of questions asked in the exam and the format of the Maths question paper for JEE Advanced.
sklearn.metrics.pairwise_distances(X, Y=None, metric='euclidean', n_jobs=None, force_all_finite=True, **kwds)[source]

Compute the distance matrix from a vector array X and optional Y.

This method takes either a vector array or a distance matrix, and returns a distance matrix. If the input is a vector array, the distances are computed. If the input is a distance matrix, it is returned instead. This method provides a safe way to take a distance matrix as input, while preserving compatibility with many other algorithms that take a vector array.

If Y is given (default is None), then the returned matrix is the pairwise distance between the arrays from both X and Y.

Valid values for metric are:

- From scikit-learn: ['cityblock', 'cosine', 'euclidean', 'l1', 'l2', 'manhattan']. These metrics support sparse matrix inputs. ['nan_euclidean'] is also accepted, but it does not yet support sparse matrices.
- From scipy.spatial.distance: ['braycurtis', 'canberra', 'chebyshev', 'correlation', 'dice', 'hamming', 'jaccard', 'kulsinski', 'mahalanobis', 'minkowski', 'rogerstanimoto', 'russellrao', 'seuclidean', 'sokalmichener', 'sokalsneath', 'sqeuclidean', 'yule']. See the documentation for scipy.spatial.distance for details on these metrics. These metrics do not support sparse matrix inputs.

Note that in the case of 'cityblock', 'cosine' and 'euclidean' (which are valid scipy.spatial.distance metrics), the scikit-learn implementation will be used, which is faster and has support for sparse matrices (except for 'cityblock'). For a verbose description of the metrics from scikit-learn, see the __doc__ of the sklearn.pairwise.distance_metrics function.

Read more in the User Guide.

Parameters:

X : array [n_samples_a, n_samples_a] if metric == "precomputed", or [n_samples_a, n_features] otherwise
    Array of pairwise distances between samples, or a feature array.

Y : array [n_samples_b, n_features], optional
    An optional second feature array. Only allowed if metric != "precomputed".
metric : string, or callable
    The metric to use when calculating distance between instances in a feature array. If metric is a string, it must be one of the options allowed by scipy.spatial.distance.pdist for its metric parameter, or a metric listed in pairwise.PAIRWISE_DISTANCE_FUNCTIONS. If metric is "precomputed", X is assumed to be a distance matrix. Alternatively, if metric is a callable function, it is called on each pair of instances (rows) and the resulting value recorded. The callable should take two arrays from X as input and return a value indicating the distance between them.

n_jobs : int or None, optional (default=None)
    The number of jobs to use for the computation. This works by breaking down the pairwise matrix into n_jobs even slices and computing them in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.

force_all_finite : boolean or 'allow-nan' (default=True)
    Whether to raise an error on np.inf and np.nan in array. The possibilities are:
    - True: force all values of array to be finite.
    - False: accept both np.inf and np.nan in array.
    - 'allow-nan': accept only np.nan values in array. Values cannot be infinite.

**kwds : optional keyword parameters
    Any further parameters are passed directly to the distance function. If using a scipy.spatial.distance metric, the parameters are still metric dependent. See the scipy docs for usage examples.

Returns:

D : array [n_samples_a, n_samples_a] or [n_samples_a, n_samples_b]
    A distance matrix D such that D_{i, j} is the distance between the ith and jth vectors of the given matrix X, if Y is None. If Y is not None, then D_{i, j} is the distance between the ith array from X and the jth array from Y.

See also: a chunked variant of this function performs the same calculation but returns a generator of chunks of the distance matrix, in order to limit memory usage; a paired variant computes the distances between corresponding elements of two arrays.

Examples using sklearn.metrics.pairwise_distances
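The semantics described above, D[i, j] being the distance between row i and row j, can be spelled out in plain NumPy. This is an illustrative sketch, not part of the scikit-learn documentation; the commented-out line shows the equivalent call through the documented API, assuming scikit-learn is installed.

```python
import numpy as np

# Three 2-D samples; rows are samples, columns are features.
X = np.array([[0.0, 0.0],
              [3.0, 4.0],
              [6.0, 8.0]])

# What pairwise_distances(X, metric='euclidean') computes: broadcast all
# row-pair differences, then take the norm along the feature axis.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# The equivalent call through the API documented above would be:
#   from sklearn.metrics import pairwise_distances
#   D = pairwise_distances(X, metric='euclidean')

assert D.shape == (3, 3)             # [n_samples_a, n_samples_a]
assert np.allclose(np.diag(D), 0.0)  # every sample is distance 0 from itself
assert np.isclose(D[0, 1], 5.0)      # 3-4-5 right triangle
```

Passing a second array Y instead of None would produce the [n_samples_a, n_samples_b] case described under Returns.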
JNTU-A B.TECH R20 1-2 Syllabus For Probability and statistics PDF 2022

January 29, 2022

Get Complete Lecture Notes for Probability and statistics on the CynoHub APP. Download the APP Now! ( Click Here )

You will be able to find information about Probability and statistics along with its Course Objectives and Course Outcomes, and also a list of textbooks and reference books, in this blog. You will get to learn a lot of new material and resolve many questions you may have regarding Probability and statistics after reading this blog.

Probability and statistics has 5 units altogether, and you will be able to find notes for every unit on the CynoHub app. Probability and statistics can be learnt easily as long as you have a well-planned study schedule and practice all the previous question papers, which are also available on the CynoHub app.

All of the topics and subtopics related to Probability and statistics are mentioned below in detail. If you are having a hard time understanding Probability and statistics or any other engineering subject of any semester or year, then please watch the video lectures on the official CynoHub app, as it has detailed explanations of each and every topic, making your engineering experience easy.

Probability and statistics Unit One: Descriptive statistics

Statistics introduction, measures of variability (dispersion), skewness, kurtosis, correlation, correlation coefficient, rank correlation, principle of least squares, method of least squares, regression lines, regression coefficients and their properties.
Probability and statistics Unit Two: Probability

Probability, probability axioms, addition law and multiplicative law of probability, conditional probability, Bayes' theorem, random variables (discrete and continuous), probability density functions.

Probability and statistics Unit Three: Probability distributions

Discrete distributions: Binomial, Poisson, the Poisson approximation to the binomial distribution, and their properties. Continuous distributions: the normal distribution and its properties.

Probability and statistics Unit Four: Estimation and testing of hypotheses, large sample tests

Estimation: parameters, statistics, sampling distribution, point estimation. Formulation of the null hypothesis, alternative hypothesis, the critical and acceptance regions, level of significance, the two types of errors, and power of the test. Large sample tests: test for a single proportion, difference of proportions, test for a single mean and difference of means. Confidence intervals for parameters in one-sample and two-sample problems.

Probability and statistics Unit Five: Small sample tests

Student's t-distribution (test for a single mean, two means, and the paired t-test), testing of equality of variances (F-test), χ2-test for goodness of fit, χ2-test for independence of attributes.
Probability and statistics Course Objectives

To familiarize the students with the foundations of probability and statistical methods. To impart probability concepts and statistical methods in various engineering applications.

Probability and statistics Course Outcomes

Upon successful completion of this course, the student should be able to:
● Make use of the concepts of probability and their applications (L3)
● Apply discrete and continuous probability distributions (L3)
● Classify the concepts of data science and its importance (L4)
● Interpret the association of characteristics through correlation and regression tools
● Design the components of a classical hypothesis test (L6)
● Infer statistical inferential methods based on small and large sampling tests

Probability and statistics Text Books

1. Miller and Freund's Probability and Statistics for Engineers, 7/e, Pearson, 2008.
2. S.C. Gupta and V.K. Kapoor, Fundamentals of Mathematical Statistics, 11/e, Sultan Chand & Sons Publications, 2012.

Probability and statistics Reference Books

1. S. Ross, A First Course in Probability, Pearson Education India, 2002.
2. W. Feller, An Introduction to Probability Theory and its Applications, 1/e, Wiley, 1968.
3. Peyton Z. Peebles, Probability, Random Variables & Random Signal Principles, McGraw Hill Education, 4th Edition, 2001.

Scoring Marks in Probability and statistics

Scoring a really good grade in Probability and statistics is a difficult task indeed, and CynoHub is here to help! Please watch the video below and find out how to get 1st rank in your B.Tech examinations. This video will also inform students on how to score high grades in Probability and statistics. There are many reasons for getting a bad score in your Probability and statistics exam, and this video will help you rectify your mistakes and improve your grades.

Information about the JNTU-A B.Tech R20 Probability and statistics syllabus was provided in detail in this article.
To know more about the syllabus of other engineering subjects of JNTU-A, check out the official CynoHub application. Click below to download the CynoHub application.
A polygon is a closed plane figure formed by at least three straight sides. Polygons can be classified in many ways. A regular polygon is a convex polygon that is both equilateral (all sides congruent) and equiangular (all angles congruent). Consider the quadrilateral shown:

1. In what ways could you break the quadrilateral into the least number of triangles? How many triangles does this create?
2. Determine the sum of the interior angles of the quadrilateral.
3. Draw a hexagon, determine the least number of triangles you could break the polygon into, and determine the sum of its interior angles.
4. What can you say about the relationship between the number of sides of any polygon, the number of triangles it can be divided into, and the sum of its interior angles?

The sum of the interior angle measures of a polygon depends on the number of sides of the polygon. A polygon with n sides (an n-gon) can always be divided into (n - 2) non-overlapping triangles, so its interior angles sum to (n - 2) × 180°. This fact and the triangle angle sum theorem help us calculate interior angle sums and individual angle measures of regular polygons.
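The triangle-counting argument above can be turned into a two-line calculation. This is a small illustrative sketch (the function names are my own, not from the worksheet):

```python
def interior_angle_sum(n):
    """Sum of the interior angles of an n-gon, in degrees.

    An n-gon splits into (n - 2) non-overlapping triangles, each
    contributing 180 degrees by the triangle angle sum theorem.
    """
    if n < 3:
        raise ValueError("a polygon needs at least 3 sides")
    return (n - 2) * 180

def regular_interior_angle(n):
    """Each interior angle of a regular n-gon (all angles congruent)."""
    return interior_angle_sum(n) / n

# Quadrilateral: 2 triangles, 360 degrees; hexagon: 4 triangles, 720 degrees.
assert interior_angle_sum(4) == 360
assert interior_angle_sum(6) == 720
assert regular_interior_angle(3) == 60.0
```

For the worksheet's quadrilateral, `interior_angle_sum(4)` reproduces the 360° answer from question 2.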
US Math Competitions | Russian Math Tutors

The contest has demonstrated that children enjoy solving fun and exciting problems within their reach. This bridges the gap between the standard and often dull exercises in school textbooks and the challenging, demanding problems of more advanced Math Olympiads. The contest's primary purpose is to engage as many children as possible in solving mathematical problems and to show each student that problem-solving can be lively, exciting, and entertaining. This purpose has been achieved successfully: in 2023, over 6 million children from more than 70 countries participated in the contest.

All students from grades 1-12 of general educational institutions who have paid the registration fee can participate in the contest without any prior selection. (Like in other countries, the contest is not free, but the cost has been meager in recent years - about $21. This money covers the expenses of the competition and rewards many participants with small but varied prizes.)

The contest occurs in all schools on the same day, the third Thursday in March. During the pandemic years, however, some schools held the test on a different day. Please check the Math Kangaroo website for the exact schedule.

When choosing problems, two principles are followed: first, solving problems should be fun; second, "Kangaroo" is, although not very hard, still a contest, so the most skilled and prepared should win. All problems in the Olympiad are split into three categories, with ten problems in each (for the youngest participants, the last, most challenging section has only six problems). The first section has easy, often funny problems, each worth 3 points. These problems are chosen so contestants can solve at least one and have fun. They are within the reach of anyone who reads the statement carefully, and they need no special training.
But even in them there are surprising questions and even cunning "traps," so it cannot be said that they are entirely trivial. The problems worth 4 points are meant for high school students and "good students" to solve independently. These problems are much more complex than the 3-point ones and are usually closer to the school curriculum. The last section has complex, non-standard problems, each worth 5 points. They are chosen so that even the most prepared students have something to think about. One must show cleverness, the ability to reason independently, and observation to solve them.

The Noetic Learning Math Contest (NLMC) is a semiannual problem-solving competition for elementary and middle school students. It aims to enhance students' mathematical skills and foster their interest in math by presenting them with engaging problems to solve independently within a given time frame. During the contest, participants are challenged to solve 20 creative problems without a calculator, with 45 minutes for the paper-pencil format and 50 minutes for the online version. For the 2023-2024 school year, both paper-pencil and online versions of the contest will continue to be offered.

Their Approach:
• Grade-level Tailored Problems: Unique problems are designed for each grade level, ensuring that students face challenges appropriate to their skill level.
• Broad Recognition: Comprehensive recognition is provided to participants, with the top 10% receiving National Honor Roll medals, the top 50% receiving Honorable Mention ribbons, and each team's top scorer earning a Team Winner medal.

Recognition and Awards:
• Team Winner Medal: Awarded to the highest scorer of each team.
• National Honor Roll Medal: Given to the top 10% of all participants nationwide.
• National Honorable Mention Ribbon: Awarded to the top 50% of all participants nationwide.
• Team Achievement Plaque: Presented to the top 10% of teams (excluding after-school institution teams).
Looking for the best math competition tutor online? Look no further, we are here to help: Private Coaching | Group Coaching

The Mathematical Olympiads for Elementary and Middle Schools (moems.org) is a math competition program for students in grades 4 through 8. It is designed to stimulate enthusiasm and a love for mathematics, encourage mathematical creativity, and develop problem-solving skills. Here are some key points about the MOEMS contest:

1. Format: MOEMS consists of five monthly contests from November to March. Each contest consists of five non-routine problems.
2. Problem Types: The problems presented in MOEMS are designed to be challenging and require creative problem-solving. They cover various mathematical topics and may involve logical reasoning and critical thinking.
3. Scoring: Students earn points for correct answers, and their scores contribute to both individual and team rankings. Recognition is given at the local, regional, and national levels.
4. Division E and Division M: MOEMS has two divisions, one for elementary students (grades 4 and 5) and one for middle school students (grades 6, 7, and 8).
5. Math Club Participation: Many schools participate in MOEMS through math clubs, where students work collaboratively to solve problems. Math clubs can enhance students' problem-solving skills and foster a sense of camaraderie.
6. Certificates and Awards: Participants receive certificates based on their performance, and high achievers may be eligible for additional awards and recognition.
7. Purpose: MOEMS aims to allow students to solve problems creatively beyond the regular classroom curriculum. It helps nurture a positive attitude toward mathematics and fosters a sense of accomplishment.

Overall, MOEMS is a popular math contest that encourages students to explore the beauty and depth of mathematics while honing their problem-solving abilities.

Mathcounts (mathcounts.org) is a prestigious mathematics competition program in the United States aimed at middle school students.
Here are key points about the Mathcounts contest:

1. Participants: Mathcounts is designed for students in grades 6-8, typically covering ages 11-14. It is a middle school-focused competition.
2. Format: The Mathcounts competition consists of several rounds, including the School, Chapter, State, and National rounds. The contest format involves both individual and team competitions.
3. Individual Round: Participants solve math problems individually, testing their problem-solving skills and mathematical knowledge.
4. Team Round: Teams of up to four students collaborate to solve more complex problems. This round encourages teamwork and effective communication.
5. Countdown Round: The top-scoring students from the written rounds engage in a "Countdown Round," a fast-paced oral competition where questions are presented to individuals who must respond quickly.
6. Sprint and Target Rounds: These written rounds challenge participants with various mathematical problems. The Sprint Round consists of 30 rapid-fire questions, while the Target Round features more in-depth problems solved in pairs.
7. Competition Levels: Mathcounts starts at the school level, progressing to the Chapter, State, and finally the National Competition, where the most successful students from each state compete.
8. Math Club Involvement: Many schools have Mathcounts clubs or teams that prepare for the competition throughout the academic year. Dedicated coaches often guide students in honing their problem-solving skills.
9. National Math Club: Besides the competition, Mathcounts provides resources for a National Math Club, encouraging students to engage in mathematical activities and challenges beyond the official contest.
10. Purpose: Mathcounts promotes math excellence, teamwork, and enthusiasm among middle school students. It provides an opportunity for talented students to showcase their mathematical abilities.
Overall, Mathcounts is a well-regarded math competition program that has contributed to the development of countless students interested in mathematics across the United States.

The USA Mathematical Talent Search (USAMTS) is a yearly, free, three-round math competition open nationwide to middle and high school students. Unlike other contests, it emphasizes problem-solving over speed, giving participants at least a month to solve the five problems of each round. Justifications are required for all but the first problem, which is a puzzle. The USAMTS aims to develop problem-solving and writing skills while providing a pathway to the International Mathematical Olympiad. Students who score 68 or above (out of 75) on the USAMTS are eligible to participate in the AIME, the second stage in the selection process for the team representing the USA at the IMO.

The AMC (American Mathematics Competitions) is an examination series that builds problem-solving skills and mathematical knowledge in middle and high school students. The AMC is organized by the Mathematical Association of America (MAA). The AMC series consists of several different exams based on grade level:

1. AMC 8: For students in grade 8 and below. It's a 25-question, 40-minute multiple-choice examination.
2. AMC 10: For students in grade 10 and below. It's a 25-question, 75-minute multiple-choice examination.
3. AMC 12: For students in grade 12 and below. Like the AMC 10, it's a 25-question, 75-minute multiple-choice examination.

The primary purpose of these competitions is to awaken interest in mathematics and develop talent. The series covers all the major topics from the curriculum: algebra, number theory, geometry, and combinatorics.
Participation in the AMC can lead to further opportunities, such as an invitation to the American Invitational Mathematics Examination (AIME) and the USA Mathematical Olympiad (USAMO), depending on the scores achieved. Moreover, it helps develop competitiveness and test-taking skills that students can draw on throughout their careers.

The American Invitational Mathematics Examination, often abbreviated AIME, is an advanced mathematics competition primarily for high school students. It represents a critical step in the progression of mathematically talented students in the United States, bridging the gap between the American Mathematics Competitions (AMC) and the most elite level of mathematical challenges, such as the USA Mathematical Olympiad and USA Junior Mathematical Olympiad (USAMO/USAJMO).

AIME is a distinctive contest, primarily because it targets students with top performance on either the AMC 10 or the AMC 12. The examination format is also different: it consists of 15 questions, with each answer being an integer in the range 0 to 999. Candidates are given 3 hours to tackle these problems, reflecting the complexity and depth of the mathematical thinking required.

What truly sets AIME apart is its focus on original and innovative problem-solving skills and the application of sophisticated mathematical concepts. The problems are designed to encourage out-of-the-box thinking and often require a synthesis of multiple areas of mathematics. This approach tests and nurtures a deep and comprehensive understanding of mathematical principles. Excelling in this examination can lead to opportunities such as an invitation to the prestigious USAMO/USAJMO, and it is often considered a marker of high potential and achievement in mathematics.

The United States of America Mathematical Olympiad (maa.org), commonly known as the USAMO (with a junior version, the USAJMO), is a highly prestigious mathematics competition in the United States, primarily aimed at high school students.
This Olympiad marks the pinnacle of high school mathematics competitions in the U.S. and serves as a gateway to international mathematical arenas, such as the International Mathematical Olympiad (IMO).

The USAMO/USAJMO is an invitation-only competition for those who achieve outstanding scores in the American Mathematics Competitions (AMC) and the American Invitational Mathematics Examination (AIME). This rigorous selection process ensures that only the most talented and skilled young mathematicians participate in the Olympiad. Those who perform exceptionally well on the AMC 12 are invited to the USAMO, while top scorers on the AMC 10 are invited to the USAJMO.

The format of the USAMO/USAJMO is uniquely challenging: it consists of a two-day examination comprising six extensive, proof-based mathematical problems, three per day. These problems require not just mathematical knowledge but deep creativity, ingenuity, and advanced problem-solving skills. Each day's exam lasts four and a half hours, highlighting the complexity and depth of the problems posed.

Success in the USAMO/USAJMO is a significant achievement, often regarded as an indicator of exceptional mathematical talent and potential. High performers in the USAMO/USAJMO are usually candidates for the U.S. team in the International Mathematical Olympiad, where they represent the country on a global stage.

The highest achievers in the USAMO and USAJMO are offered invitations to join the Mathematical Olympiad Program (MOP) during the summer immediately following the competitions. Those participating in MOP become eligible for selection to the six-member team representing the United States of America at the International Mathematical Olympiad the following summer.
The International Mathematical Olympiad (IMO-official.org), commonly referred to as the IMO, is the world's most prestigious mathematics competition for high school students. Held annually in a different country, it represents the pinnacle of mathematical problem-solving at the pre-university level and brings together the most brilliant young minds from around the globe.

The IMO is an invitation-only competition, with each participating country selecting and training a team of up to six students through national competitions and rigorous preparation. This ensures that only the most exceptional young mathematicians, capable of representing their country on the international stage, participate.

The format of the IMO is practically the same as the USAMO's, but the problems themselves are, of course, of a different level of complexity. Each participant has a chance to receive a medal according to their performance:

1. Core Thresholds: Each participant's score is calculated out of the maximum possible total, which is determined by the points available from all the problems in the competition. The maximum score is usually 42 points (each of the six problems is worth 7 points).
2. Medal Allocation: The allocation of gold, silver, and bronze medals is based on the distribution of scores among all participants. The basic guideline is:
• Gold Medals: Awarded to approximately the top 1/12 of the contestants.
• Silver Medals: Given to approximately the next 1/6 of the contestants.
• Bronze Medals: Allocated to approximately the next 1/4 of the contestants.
3. No Fixed Score Thresholds: Unlike some competitions, the IMO does not have fixed score thresholds for each type of medal. Instead, the gold, silver, and bronze cutoffs are determined by the ratios above and the distribution of that year's scores.
4. Honorable Mentions: Participants who do not win a medal but solve at least one problem completely (earning 7 points for that problem) typically receive an honorable mention.
5. Team Performance vs.
Individual Awards: It's important to note that medals are awarded based on individual performance, not team performance. Each country's team comprises up to six students, but their scores are considered independently when determining medal eligibility.

Participation in the IMO is more than a competition; it's an unparalleled opportunity for young mathematicians to engage with complex mathematical challenges, showcase their talents on an international stage, and be part of a global community that shares a deep passion for mathematics. The experience of competing in the IMO often has a profound and lasting impact on the participants, fostering a lifelong love for mathematics and problem-solving.

The William Lowell Putnam Mathematical Competition® is the top math contest for undergraduates in the U.S. and Canada. It happens yearly on the first Saturday of December, and it features two 3-hour sessions, each with six challenging math problems for individual participants to solve. Past years' papers for this competition can be found online.
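The IMO medal-allocation ratios quoted earlier (about 1/12 gold, the next 1/6 silver, the next 1/4 bronze) can be turned into a rough head-count. This is only an illustration of the ratios themselves; as the text notes, real juries round the cutoffs against the actual score distribution, and the function name and field size here are hypothetical.

```python
from fractions import Fraction

def medal_counts(n_contestants):
    """Approximate gold/silver/bronze counts implied by the IMO ratios:
    top 1/12 gold, next 1/6 silver, next 1/4 bronze."""
    gold = int(n_contestants * Fraction(1, 12))
    silver = int(n_contestants * Fraction(1, 6))
    bronze = int(n_contestants * Fraction(1, 4))
    return gold, silver, bronze

g, s, b = medal_counts(600)  # a hypothetical field of 600 contestants
assert (g, s, b) == (50, 100, 150)

# The ratios sum to 1/12 + 1/6 + 1/4 = 1/2, so roughly half the field
# receives some medal under this guideline.
assert g + s + b == 600 // 2
```

The check at the end makes the often-quoted "about half of all contestants get a medal" rule of thumb explicit.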
Number Line Drawer
Number line is an online mathematical manipulative that helps students develop greater flexibility in mental arithmetic as they actively construct mathematical meaning, number sense, and understandings of number relationships.
What is the number line? A number line is a horizontal straight line on which numbers are placed at equal intervals. Each mark represents a unit, the line extends indefinitely at both ends, and a number on the right is greater than a number on the left: −1 is less than 1, and −8 is less than −5. Values may be whole numbers, negative numbers or decimals.
How to draw a number line? Draw a horizontal line with arrows on both ends, choose a scale depending on the given numbers, and mark zero as the centre point with evenly spaced units on either side. (You can also draw a vertical number line.)
Steps to add/subtract on a number line: start at the first number, then move as many steps as the second number — we move right to add, and left to subtract. For example: John borrowed $3 to pay for his lunch; borrowing moves him 3 steps to the left of zero, landing at −3.
Using the generator: type in where the number line should begin, where it should end, and how the numbers should increment, then hit the button. The result can be saved as an image for use with Word and PowerPoint, and the number lines are available to print and use today.
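The add/subtract-on-a-number-line steps described above can be sketched in a few lines of Python. The function names and the ASCII rendering are my own illustration, not part of the drawer tool itself:

```python
# Minimal sketch of "move right to add, move left to subtract".

def walk_number_line(start, step):
    """Landing point after moving |step| units from `start`
    (right if step > 0, left if step < 0)."""
    return start + step

def render(lo, hi, mark):
    """Draw a simple ASCII number line from lo to hi, marking one point."""
    ticks = " ".join(f"{n:>3}" for n in range(lo, hi + 1))
    pointer = " ".join("  ^" if n == mark else "   " for n in range(lo, hi + 1))
    return ticks + "\n" + pointer

print(render(-5, 5, walk_number_line(2, -5)))  # 2 minus 5: five steps left
```

Here `walk_number_line(2, -5)` returns −3, matching the rule that subtracting moves left, and the borrowed-lunch example corresponds to `walk_number_line(0, -3)`.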
7.1 The Central Limit Theorem for Sample Means (Averages)
Suppose X is a random variable with a distribution that may be known or unknown (it can be any distribution). Using a subscript that matches the random variable, suppose
a. $\mu_X$ = the mean of X
b. $\sigma_X$ = the standard deviation of X
If you draw random samples of size n, then as n increases, the random variable $\bar{X}$, which consists of sample means, tends to be normally distributed and
$\bar{X} \sim N\left(\mu_X, \frac{\sigma_X}{\sqrt{n}}\right)$
The central limit theorem for sample means says that if you keep drawing larger and larger samples (such as rolling one, two, five, and finally, ten dice) and calculating their means, the sample means form their own normal distribution (the sampling distribution). The normal distribution has the same mean as the original distribution and a variance that equals the original variance divided by the sample size. The variable n is the number of values that are averaged together, not the number of times the experiment is done.
To put it more formally, if you draw random samples of size n, the distribution of the random variable $\bar{X}$, which consists of sample means, is called the sampling distribution of the mean. The sampling distribution of the mean approaches a normal distribution as n, the sample size, increases.
The random variable $\bar{X}$ has a different z-score associated with it from that of the random variable X. The mean $\bar{x}$ is the value of $\bar{X}$ in one sample.
$z = \dfrac{\bar{x} - \mu_X}{\left(\frac{\sigma_X}{\sqrt{n}}\right)}$
$\mu_X$ is the average of both X and $\bar{X}$.
$\sigma_{\bar{X}} = \dfrac{\sigma_X}{\sqrt{n}}$ = standard deviation of $\bar{X}$ and is called the standard error of the mean.
Using the TI-83, 83+, 84, 84+ Calculator
To find probabilities for means on the calculator, follow these steps:
2nd DISTR
normalcdf(lower value of the area, upper value of the area, mean, standard deviation / $\sqrt{\text{sample size}}$)
where
• mean is the mean of the original distribution,
• standard deviation is the standard deviation of the original distribution, and
• sample size = n.
Example 7.1
A distribution has a mean of 90 and a standard deviation of 15. Samples of size n = 25 are drawn randomly from the population.
a. Find the probability that the sample mean is between 85 and 92.
Solution 7.1
a. Let X = one value from the original unknown population. The probability question asks you to find a probability for the sample mean.
Let $\bar{X}$ = the mean of a sample of size 25. Because $\mu_X$ = 90, $\sigma_X$ = 15, and n = 25,
$\bar{X} \sim N\left(\mu_X, \frac{\sigma_X}{\sqrt{n}}\right)$
Find $P(85 < \bar{x} < 92)$. Draw a graph.
$P(85 < \bar{x} < 92) = 0.6997$
The probability that the sample mean is between 85 and 92 is 0.6997.
Using the TI-83, 83+, 84, 84+ Calculator
normalcdf(lower value, upper value, mean, standard error of the mean)
The parameter list is abbreviated (lower value, upper value, $\mu$, $\frac{\sigma}{\sqrt{n}}$).
normalcdf(85, 92, 90, $\frac{15}{\sqrt{25}}$) = 0.6997
b. Find the value that is two standard deviations above the expected value, 90, of the sample mean.
Solution 7.1
b. To find the value that is two standard deviations above the expected value 90, use the following formula:
value = $\mu_X$ + (#ofSTDEVs)$\left(\frac{\sigma_X}{\sqrt{n}}\right)$
value = 90 + 2$\left(\frac{15}{\sqrt{25}}\right)$ = 96.
The value that is two standard deviations above the expected value is 96.
The standard error of the mean is $\frac{\sigma_X}{\sqrt{n}} = \frac{15}{\sqrt{25}} = 3$.
Recall that the standard error of the mean is a description of how far (on average) the sample mean will be from the population mean in repeated simple random samples of size n.
Try It 7.1
An unknown distribution has a mean of 45 and a standard deviation of eight. Samples of size n = 30 are drawn randomly from the population. Find the probability that the sample mean is between 42 and
Example 7.2
The length of time, in hours, it takes a group of people, 40 years and older, to play one soccer match is normally distributed with a mean of 2 hours and a standard deviation of 0.5 hours. A sample of size n = 50 is drawn randomly from the population. Find the probability that the sample mean is between 1.8 hours and 2.3 hours.
Solution 7.2
Let X = the time, in hours, it takes to play one soccer match. The probability question asks you to find a probability for the sample mean time, in hours, it takes to play one soccer match.
Let $\bar{X}$ = the mean time, in hours, it takes to play one soccer match.
If $\mu_X$ = _________, $\sigma_X$ = __________, and n = ___________, then $\bar{X} \sim N$(______, ______) by the central limit theorem for means.
$\mu_X$ = 2, $\sigma_X$ = 0.5, n = 50, and $\bar{X} \sim N\left(2, \frac{0.5}{\sqrt{50}}\right)$
Find $P(1.8 < \bar{x} < 2.3)$. Draw a graph.
$P(1.8 < \bar{x} < 2.3) = 0.9977$
normalcdf(1.8, 2.3, 2, $\frac{0.5}{\sqrt{50}}$) = 0.9977
The probability that the mean time is between 1.8 hours and 2.3 hours is 0.9977.
Try It 7.2
The length of time taken on the SAT exam for a group of students is normally distributed with a mean of 2.5 hours and a standard deviation of 0.25 hours. A sample size of n = 60 is drawn randomly from the population. Find the probability that the sample mean is between two hours and three hours.
Using the TI-83, 83+, 84, 84+ Calculator
To find percentiles for means on the calculator, follow these steps:
2nd DISTR
k = invNorm(area to the left of k, mean, standard deviation / $\sqrt{\text{sample size}}$)
where
• k = the kth percentile
• mean is the mean of the original distribution
• standard deviation is the standard deviation of the original distribution
• sample size = n
Example 7.3
In a recent study reported Oct. 29, 2012, the mean age of tablet users is 34 years. Suppose the standard deviation is 15 years. Take a sample of size n = 100.
a. What are the mean and standard deviation for the sample mean ages of tablet users?
b. What does the distribution look like?
c. Find the probability that the sample mean age is more than 30 years (the reported mean age of tablet users in this particular study).
d. Find the 95th percentile for the sample mean age (to one decimal place).
Solution 7.3
a. Because the sample mean tends to target the population mean, we have $\mu_{\bar{X}} = \mu = 34$. The sample standard deviation is given by $\sigma_{\bar{X}} = \frac{\sigma}{\sqrt{n}} = \frac{15}{\sqrt{100}} = \frac{15}{10} = 1.5$.
b. The central limit theorem states that for large sample sizes (n), the sampling distribution will be approximately normal.
c. The probability that the sample mean age is more than 30 is given by $P(\bar{X} > 30)$ = normalcdf(30, E99, 34, 1.5) = 0.9962.
d. Let k = the 95th percentile. k = invNorm(0.95, 34, $\frac{15}{\sqrt{100}}$) = 36.5
Try It 7.3
A gaming marketing gap for men between the ages of 30 to 40 has been identified. You are researching a startup game targeted at the 35-year-old demographic. Your idea is to develop a strategy game that can be played by men from their late 20s through their late 30s. Based on the article’s data, industry research shows that the average strategy player is 28 years old with a standard deviation of 4.8 years. You take a sample of 100 randomly selected gamers.
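Parts (c) and (d) of Example 7.3 can likewise be reproduced without a calculator: `statistics.NormalDist.inv_cdf` (Python 3.8+) plays the role of invNorm. This sketch is not part of the original text.

```python
# Example 7.3: sampling distribution is N(34, 15/√100) = N(34, 1.5).
from statistics import NormalDist

mu, sigma, n = 34, 15, 100
sampling_dist = NormalDist(mu=mu, sigma=sigma / n ** 0.5)

# (c) P(x̄ > 30), i.e. normalcdf(30, E99, 34, 1.5)
p30 = 1 - sampling_dist.cdf(30)
print(round(p30, 4))  # → 0.9962

# (d) the 95th percentile, i.e. invNorm(0.95, 34, 1.5)
k = sampling_dist.inv_cdf(0.95)
print(round(k, 1))  # → 36.5
```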
If your target market is 29- to 35-year-olds, should you continue with your development strategy?
Example 7.4
The mean number of minutes for app engagement by a tablet user is 8.2 minutes. Suppose the standard deviation is one minute. Take a sample of 60.
a. What are the mean and standard deviation for the sample mean number of app engagement minutes by a tablet user?
b. What is the standard error of the mean?
c. Find the 90th percentile for the sample mean time for app engagement for a tablet user. Interpret this value in a complete sentence.
d. Find the probability that the sample mean is between eight minutes and 8.5 minutes.
Solution 7.4
a. This allows us to calculate the probability of sample means of a particular distance from the mean, in repeated samples of size 60.
b. Let k = the 90th percentile. k = invNorm(0.90, 8.2, $\frac{1}{\sqrt{60}}$) = 8.37. This value indicates that 90 percent of the average app engagement times for tablet users are less than 8.37 minutes.
c. $P(8 < \bar{x} < 8.5)$ = normalcdf(8, 8.5, 8.2, $\frac{1}{\sqrt{60}}$) = 0.9293
Try It 7.4
Cans of a cola beverage claim to contain 16 ounces. The amounts in a sample are measured and the statistics are n = 34, $\bar{x}$ = 16.01 ounces. If the cans are filled so that μ = 16.00 ounces (as labeled) and σ = 0.143 ounces, find the probability that a sample of 34 cans will have an average amount greater than 16.01 ounces. Do the results suggest that cans are filled with an amount greater than 16 ounces?
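Try It 7.4 can be worked the same way. The sketch below (my own, not part of the text) applies the CLT to the cola-can question: under the labeled mean, a sample mean as large as 16.01 is quite ordinary.

```python
# Try It 7.4: μ = 16.00, σ = 0.143, n = 34; find P(x̄ > 16.01) under the CLT.
from statistics import NormalDist

mu, sigma, n = 16.00, 0.143, 34
se = sigma / n ** 0.5                       # standard error ≈ 0.0245

p = 1 - NormalDist(mu=mu, sigma=se).cdf(16.01)
print(round(p, 4))  # ≈ 0.3417
```

A probability of roughly 0.34 means such a sample mean happens often by chance alone, so the results do not suggest the cans are filled with more than 16 ounces.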
Grade 6 Term 2 Mathematics and Language Arts - EasyA Club Grade 6 Term 2 Mathematics and Language Arts
What Will You Learn?
• At the end of the term, students should be able to:
• Mathematics
• List all prime factors of a given number.
• Write a composite number as a product of: (a) primes (b) primes in exponential form.
• Identify the Highest Common Factor (H.C.F) of two numbers.
• Differentiate between multiples and factors.
• Use substitution in formulae, algebraic sentences and inequalities in problem solving.
• Use operational symbols to complete number sentences.
• Substitute a number for a variable in a mathematical sentence with up to two variables.
• Identify shapes that will cover a plane exactly and those that will not.
• Differentiate between the size and use of the following units: square centimetre, square metre, hectare and square kilometre.
• Name and measure regions, and compute the area of regions shaped as rectangles and right-angled triangles individually, in combination, or as the surfaces of three-dimensional objects.
• Solve problems involving area measures.
• Identify and count the number of lines of symmetry in compound plane figures.
• Describe congruence in plane and solid shapes.
• Distinguish between similar and congruent figures (triangles and quadrilaterals).
• Develop the idea of a ‘unit solid’.
• Calculate the volume of a rectangular prism when given the number of unit solids in one layer and the number of layers.
• Investigate and use the formula for the volume of a rectangular prism to solve problems.
• Identify the reciprocal of a whole number or fractional number.
• Use the four basic operations to compute with fractional numbers.
• Use ratio to compute quantities.
• Write a ratio to compare the number of items in two sets or two parts of a single set.
• Write a ratio using the formats 1:5, 1 to 5, or 1/5.
• Write equivalent ratios for a given ratio.
• Solve problems which require the use of equivalent ratios.
• Apply the concept of ratio to percentage forms and use the symbol % correctly.
• Tell what percentage of a set or object is shown.
• Write a percentage as a fraction with a denominator of 100, or in its simplest form, and/or as a decimal.
• Solve problems requiring the conversion of fractions to percentages and vice versa.
• Recognize that 100% is a whole.
• Express one number as a percentage of another number that is a multiple of 10. (Measurement and money may be used.)
• Calculate a given percentage of a number, amount of money, measure of mass, capacity, etc.
• Calculate the entire amount when the percentage of a number is known. (Multiples of 5)
• Solve problems requiring the use of percentages.
• Explore how a coordinate system identifies a location and use the first quadrant of the Cartesian plane to plot points.
• Divide a fraction, mixed number or decimal fraction by a whole number.
• Divide a whole number by a fractional number.
• Divide a decimal fraction by a power of 10.
• Divide a decimal fraction by another decimal fraction to two or three places of decimals.
• Solve problems involving the division of fractional numbers.
• Compute with whole numbers, common and decimal fractions using the four operations.
• Language Arts
• Generate and answer questions from implicit and explicit information viewed.
• Share interpretations of words used in context.
• Use analogies and other word relationships, including synonyms and antonyms, to determine meaning.
• Compare and contrast setting and plot in different stories read.
• Distinguish facts from opinions.
• Distinguish between declarative, exclamatory, and interrogative sentences.
• Practice using various tenses (present, past, future, continuous, and past perfect).
• Use singular and plural nouns in different contexts.
• Construct questions using interrogative pronouns within the appropriate context.
• Use demonstrative pronouns appropriately in written and oral sentences.
• Apply stages of the writing process in producing a range of written pieces.
• Write with increasing awareness of story elements.
• Use persuasive language to convince the reader.
• Compose business letters using an appropriate layout of text and content.
• Use transitional words to write in sequence and order.
• Begin to organize information located from various sources.
• Organize information located from various sources.
Courses in the Bundle (2)
[Solved] The sum of the focal distances of any point on the ellipse 9x^2 + 16y^2 = 144 is | Filo
The sum of the focal distances of any point on the ellipse 9x^2 + 16y^2 = 144 is
The sum of the focal distances of any point on an ellipse is constant and equal to the length of the major axis of the ellipse, i.e. the sum of the focal distances of any point on an ellipse = 2a.
Dividing 9x^2 + 16y^2 = 144 by 144 gives x^2/16 + y^2/9 = 1, so a^2 = 16 and a = 4. The required sum is therefore 2a = 2 × 4 = 8.
Question Text: The sum of the focal distances of any point on the ellipse 9x^2 + 16y^2 = 144 is
• 32
• 18
• 16
• 8
Topic: Conic Sections
Subject: Mathematics
Class: Class 11
Answer Type: Text solution: 1
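The result can be checked numerically: for x^2/16 + y^2/9 = 1 we have a = 4, b = 3, so the foci sit at (±√7, 0), and the two focal distances from any point on the ellipse should sum to 2a = 8. The parametrisation below is a standard one, not part of the original solution.

```python
# Numerical check: sum of focal distances on 9x^2 + 16y^2 = 144 equals 2a = 8.
import math

a, b = 4.0, 3.0
c = math.sqrt(a * a - b * b)          # c = √(a² − b²) = √7
f1, f2 = (-c, 0.0), (c, 0.0)          # the two foci

for t in (0.3, 1.1, 2.5, 4.0):        # a few arbitrary points on the ellipse
    x, y = a * math.cos(t), b * math.sin(t)
    d = math.dist((x, y), f1) + math.dist((x, y), f2)
    assert abs(d - 2 * a) < 1e-9      # constant, equal to the major axis

print("sum of focal distances =", 2 * a)  # → 8.0
```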
Time value of money, Present Value (PV), Future Value (FV), Net Present Value (NPV), Internal Rate of Return (IRR)
Why do I use my current money to invest in the stock market? Because I expect to have more money in the future. Why do I need more money in the future than now? Because, for many reasons, the same amount of money will have less purchasing power than it does today. My investment therefore needs to generate more money than today to protect my purchasing power in the future. That is the main concept of the time value of money, where one dollar today is worth more than one dollar in the future.
Present Value (PV), Future Value (FV)
At a 10% annual growth rate, an investment of 1000$ will be worth 1000 * 110% = 1100$ after 1 year, and will be worth 1000 * 110% * 110% = 1210$ after 2 years.
• The future value of 1000$ after 2 years at the rate of 10% is 1210$.
• Inversely, the "present value" of 1210$ 2 years ago at the rate of 10% is 1000$.
Net Present Value (NPV)
Let's say I have 1000$ now and a bank offers me a saving account at a 10% annual rate. At the same time, a public company attracts my attention because it pays a very good amount of dividend each year and has potential growth in the future. The scenario is that I would buy 100 shares at the cost of 10$ each now and hold them for 5 years. Each year I would receive a fairly good and stable amount of dividends, at least a 5% yield. After 5 years, I would sell all shares with the expectation that the share price would grow 20%. The table below shows the scenario of cash flows if I buy stocks of that company. Which investment should I choose?
Year  Amount      Type
0     -$1,000.00  Buy
1     $50.00      Dividend
2     $55.00      Dividend
3     $60.00      Dividend
4     $65.00      Dividend
5     $1,200.00   Sell
To make that decision, let's put it this way: I want to have the same scenario of cash flows as investing in the stock market but by putting money into a saving account at a 10% annual rate. How much money do I need to put into that saving account now?
Let's do the math:
• At a 10% rate, to have 50$ after 1 year, I need to invest 45.45$ now. Because 50$ is the future value of 45.45$ after 1 year at a 10% rate.
• At a 10% rate, to have 55$ after 2 years, I need to invest 45.45$ now. Because 55$ is the future value of 45.45$ after 2 years at a 10% rate.
• At a 10% rate, to have 60$ after 3 years, I need to invest 45.08$ now. Because 60$ is the future value of 45.08$ after 3 years at a 10% rate.
• At a 10% rate, to have 65$ after 4 years, I need to invest 44.40$ now. Because 65$ is the future value of 44.40$ after 4 years at a 10% rate.
• At a 10% rate, to have 1200$ after 5 years, I need to invest 745.11$ now. Because 1200$ is the future value of 745.11$ after 5 years at a 10% rate.
In total, to have the same scenario of cash flows as investing in the stock market, I need to invest 45.45 + 45.45 + 45.08 + 44.40 + 745.11 = 925.49$ now in a saving account at a 10% rate. In other words, to have the same result in the future, investing in the stock market costs me 1000$ now, whereas investing in the saving account at a 10% rate costs me only 925.49$ now. Therefore, in this case, I should rather put money in the saving account at a 10% rate.
Now let's suppose that the share price would grow 50% during the 5 years, which means the last cash flow would be 1500$ in the 5th year. In this case, at a 10% rate, to have 1500$ after 5 years, I need to invest 931.38$ now, because 1500$ is the future value of 931.38$ after 5 years at a 10% rate. In total, to have the same scenario of cash flows as investing in the stock market, I need to invest 45.45 + 45.45 + 45.08 + 44.40 + 931.38 = 1111.76$ now in a saving account at a 10% rate. In other words, to have the same result in the future, investing in the stock market costs me only 1000$ now, whereas investing in the saving account at a 10% rate costs me 1111.76$ now.
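The discounting above follows one formula, PV = CF / (1 + r)^t. A minimal Python sketch (illustrative only — this series implements everything in Google Sheets) recomputes both totals; note that summing the rounded per-year figures gives 925.49 and 1111.76, while the unrounded totals are 925.49 and 1111.77.

```python
# Present value of each future cash flow at the 10% saving rate.
def present_value(cash_flow, rate, years):
    return cash_flow / (1 + rate) ** years

rate = 0.10
scenario_1 = [(50, 1), (55, 2), (60, 3), (65, 4), (1200, 5)]  # sell at +20%
scenario_2 = [(50, 1), (55, 2), (60, 3), (65, 4), (1500, 5)]  # sell at +50%

cost_1 = sum(present_value(cf, rate, t) for cf, t in scenario_1)
cost_2 = sum(present_value(cf, rate, t) for cf, t in scenario_2)
print(round(cost_1, 2))  # → 925.49  (less than the 1000$ the stocks cost)
print(round(cost_2, 2))  # → 1111.77 (more than the 1000$ the stocks cost)
```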
Therefore, in this case, I have a better deal investing in the stock market than putting money in a saving account at a 10% rate.
Moreover, because at year 0 the future value equals the present value, I can put it this way:
• For the first scenario, the sum of all present values of the cash flows is: -1000 + 45.45 + 45.45 + 45.08 + 44.40 + 745.11 = -74.51 < 0
• For the second scenario, the sum of all present values of the cash flows is: -1000 + 45.45 + 45.45 + 45.08 + 44.40 + 931.38 = 111.76 > 0
What I have calculated so far is the sum of the present values of future cash flows at a defined discount rate, compared with 0. That's what they call Net Present Value (NPV). If the Net Present Value (NPV) is positive, the investment is worth pursuing.
Discount rate
In this example, I have chosen the saving account at a 10% annual rate as a benchmark to evaluate my investment in the stock market. The 10% annual rate of that saving account is therefore the discount rate in my evaluation. My investment in the stock market must beat 10% annually; otherwise, it is not worth my time and effort because I can easily save that money at 10% annually. The choice of a discount rate is important and depends on personal preferences. Here are a few examples:
• A minimum required rate of return for an investment that one sets for herself/himself
• An expected rate of return if investing in an alternative asset such as a saving account, real estate, buying a business, etc.
• A reference rate of return of the market: S&P 500, CAC 40, etc.
Internal Rate of Return (IRR)
The discount rate that makes the Net Present Value (NPV) equal to zero is called the Internal Rate of Return (IRR).
• For the first scenario of cash flows above, that internal rate of return is 8.13%.
□ At an 8.13% rate, the sum of all present values of the cash flows is: -1000 + 46.24 + 47.04 + 47.46 + 47.54 + 811.72 = 0.
□ Because 8.13% < 10%, it confirms once again that the saving account at a 10% rate is the better choice.
Year  Amount      Type      Present Value
0     -$1,000.00  Buy       -$1,000.00
1     $50.00      Dividend  $46.24
2     $55.00      Dividend  $47.04
3     $60.00      Dividend  $47.46
4     $65.00      Dividend  $47.54
5     $1,200.00   Sell      $811.72
• For the second scenario of cash flows above, that internal rate of return is 12.57%.
□ At a 12.57% rate, the sum of all present values of the cash flows is: -1000 + 44.42 + 43.40 + 42.06 + 40.47 + 829.66 = 0.
□ Because 12.57% > 10%, it confirms once again that investing in the stock market is the better choice.
Year  Amount      Type      Present Value
0     -$1,000.00  Buy       -$1,000.00
1     $50.00      Dividend  $44.42
2     $55.00      Dividend  $43.40
3     $60.00      Dividend  $42.06
4     $65.00      Dividend  $40.47
5     $1,500.00   Sell      $829.66
In summary, Net Present Value (NPV) and Internal Rate of Return (IRR) are two methods that help me evaluate the performance of an investment.
To evaluate an investment with Net Present Value (NPV), I follow the steps below:
• Identify all cash flows
• Pick a discount rate
• Calculate the Net Present Value (NPV) by summing all present values of those cash flows
• If the Net Present Value (NPV) is positive, the investment is worth pursuing
To evaluate an investment with Internal Rate of Return (IRR), I follow the steps below:
• Identify all cash flows
• Pick a discount rate
• Calculate the Internal Rate of Return (IRR), the rate that makes the Net Present Value (NPV) equal to 0
• If the Internal Rate of Return (IRR) is bigger than the discount rate, the investment is worth pursuing
Performing those steps requires many calculations, and I don't perform them manually. I have leveraged the built-in functions of Google Sheets to do those tasks. In the next posts, I will explain how to calculate Net Present Value (NPV) and Internal Rate of Return (IRR) in Google Sheets, particularly in the context of a stock portfolio.
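The two recipes can also be sketched outside a spreadsheet. The `npv` and `irr` functions below are my own minimal versions (the series itself uses Google Sheets' built-ins): NPV sums the discounted cash flows including year 0, and IRR is found by bisection as the rate where NPV crosses zero. Recomputing the sums exactly gives NPVs of -74.51 and 111.77 at the 10% discount rate, and IRRs of 8.13% and 12.57%.

```python
# NPV: sum of discounted cash flows; IRR: the rate making NPV zero.
def npv(rate, cash_flows):
    """cash_flows[t] is the net cash flow in year t (index 0 = today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Bisection search; assumes NPV decreases from positive to negative
    over [lo, hi], which holds for an initial outlay followed by inflows."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid          # NPV still positive: the rate must be higher
        else:
            hi = mid
    return (lo + hi) / 2

scenario_1 = [-1000, 50, 55, 60, 65, 1200]
scenario_2 = [-1000, 50, 55, 60, 65, 1500]
print(round(npv(0.10, scenario_1), 2), round(irr(scenario_1) * 100, 2))  # → -74.51 8.13
print(round(npv(0.10, scenario_2), 2), round(irr(scenario_2) * 100, 2))  # → 111.77 12.57
```

Either test leads to the same decision: the first scenario loses to the 10% saving account, the second beats it.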
Series: how to calculate internal rate of return (IRR) and net present value (NPV) for a stock portfolio in Google Sheets
The post is only for informational purposes and not for trading purposes or financial advice.
In this post, I explain how to implement FIFO method in Google Sheets to compute cost basis in stocks investing. Although Google Sheets does not provide a ready-to-use function that takes a column index as an input and returns corresponding letters as output, we can still do the task by leveraging other built-in functions ADDRESS , REGEXEXTRACT , INDEX , SPLIT as shown in the post . However, in form of a formula, that solution is not applicable for scripting with Google Apps Script. In this post, we look at how to write a utility function with Google Apps Script that converts column index into corresponding letters. Many functions in Google Sheets return an array as the result. However, I find that there is a lack of built-in support functions in Google Sheets when working with an array. For example, the GOOGLEFINANCE function can return the historical prices of a stock as a table of two columns and the first-row being headers Date and Close. How can I ignore the headers or remove the headers from the results? I have been investing in the stock market for a while. I was looking for a software tool that could help me better manage my portfolio, but, could not find one that satisfied my needs. One day, I discovered that the Google Sheets application has a built-in function called GOOGLEFINANCE which fetches current or historical prices of stocks into spreadsheets. So I thought it is totally possible to build my own personal portfolio tracker with Google Sheets. I can register my transactions in a sheet and use the pivot table, built-in functions such as GOOGLEFINANCE, and Apps Script to automate the computation for daily evolutions of my portfolio as well as the current position for each stock in my portfolio. I then drew some sort of charts within the spreadsheet to have some visual ideas of my portfolio. However, I quickly found it inconvenient to have the charts overlapped the table and to switch back and forth among sheets in the spreadsheet. 
That's when I came to know the existence of Google Data Studio.

Anyone using Google Sheets to manage stock portfolio investment must know how to use the GOOGLEFINANCE function to fetch historical prices of stocks. As I have used it extensively to manage my stock portfolio investment in Google Sheets, I have learned several best practices for using the GOOGLEFINANCE function that I would like to share in this post.

As my investment strategy is to buy stocks that pay regular and stable dividends over a long-term period, I need to monitor my dividend income by stock, by month, and by year, so that I can answer quickly and exactly the following questions: How much dividend did I receive in a given month and a given year? How much dividend did I receive for a given stock in a given year? Has a given stock's annual dividend per share kept increasing gradually over the years? Has a given stock's annual dividend yield been stable over the years? In this post, I explain how to create a dividend tracker for a stock investment portfolio with Google Sheets by simply using pivot tables.

As explained in the post Create personal stock portfolio tracker with Google Sheets and Google Data Studio, a personal stock portfolio tracker consists of 2 main elements: a spreadsheet in Google Sheets and an interactive dashboard in Google Data Studio. The dashboard below is built with Google Data Studio and visualizes data stored in the spreadsheet that can be found in the post Demo stock portfolio tracker with Google Sheets. The dashboard below is not an image; it is a real one and is interactive. You can change some filters to see data from a different perspective. For instance, you can change the date range or select a particular stock.

NOTE: An enhanced version was published at Create personal stock portfolio tracker with Google Sheets and Google Data Studio. The first task of building a stock portfolio tracker is to design a solution to register transactions.
A transaction is an event in which a change happens to a stock portfolio, for instance, selling shares of a company, depositing money, or receiving dividends. Transactions are essential inputs to a stock portfolio tracker, and it is important to keep track of them to make good investment decisions. In this post, I will explain step by step how to keep track of stock transactions with Google Sheets.
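One of the posts summarized above describes a Google Apps Script utility that converts a column index into the corresponding letters. A minimal sketch of such a function (the function name is ours, not necessarily the one used in the post) could look like this:

```javascript
/**
 * Convert a 1-based spreadsheet column index into its letter form,
 * e.g. 1 -> "A", 26 -> "Z", 27 -> "AA".
 * Sketch only: treats the column index as a bijective base-26 number.
 */
function columnToLetter(column) {
  let letters = "";
  while (column > 0) {
    const remainder = (column - 1) % 26;           // 0..25 maps to A..Z
    letters = String.fromCharCode(65 + remainder) + letters;
    column = Math.floor((column - 1) / 26);        // carry to the next "digit"
  }
  return letters;
}
```

Unlike the ADDRESS/REGEXEXTRACT formula approach, this runs directly inside Apps Script, so it can be called from other script functions without touching a sheet.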
{"url":"https://www.allstacksdeveloper.com/2021/12/time-value-of-money-pv-fv-npv-irr.html","timestamp":"2024-11-06T07:51:03Z","content_type":"application/xhtml+xml","content_length":"188490","record_id":"<urn:uuid:0b2a13df-d44e-426b-beb9-7499288c27b7>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00349.warc.gz"}
What is aesthetics and attributes in ggplot’s world?

[This article was first published on Posts | SERDAR KORUR, and kindly contributed to R-bloggers.]

ggplot2 is a powerful data visualization tool of R. Make quick visualizations to explore or share your insights. Learning how aesthetics and attributes are defined in ggplot will give you an edge in developing your skills quickly.

ggplot2 tips: the distinction between aesthetics and attributes

Aesthetics are defined inside aes() in ggplot syntax and attributes outside aes(), e.g. ggplot(data, aes(x, y, color = var1)) + geom_point(size = 6). We typically understand aesthetics as how something looks: color, size, etc. But in ggplot’s world, how things look is just an attribute. Aesthetics do not refer to how something looks, but to which variable is mapped onto it. I will create an imaginary data frame to apply those concepts.

points <- 500
# Defining the Golden Angle
angle <- pi*(3-sqrt(5))
t <- (1:points) * angle
x <- sin(t/2)
y <- cos(t/2)
z <- rep(c(1,2,3,4,5,6,7,8,9,10), times=50)
w <- rep(c(1,2), times=250)
df <- data.frame(t, x, y, z, w)

# Have a look at the data
head(df)
##           t          x           y z w
## 1  2.399963  0.9320324  0.36237489 1 1
## 2  4.799926  0.6754903 -0.73736888 2 2
## 3  7.199890 -0.4424710 -0.89678282 3 1
## 4  9.599853 -0.9961710  0.08742572 4 2
## 5 11.999816 -0.2795038  0.96014460 5 1
## 6 14.399779  0.7936008  0.60843886 6 2

The data frame we created has 3 numeric variables (t, x, y) and 2 discrete variables (z, w). With ggplot2 I can map any of the variables onto my plot by defining them inside aes().

# Make a scatter plot of points of a spiral
p <- ggplot(df, aes(x*t, y*t))
p + geom_point()

Example use of an aesthetic

By defining col=factor(z) inside aes(), I can map z to colors.
So now the graph shows x, y and also the values of z.

# Make a scatter plot of points in a spiral
p <- ggplot(df, aes(x*t, y*t, col=factor(z)))
p + geom_point()

Each different color now represents a different value of z.

Example use of an attribute

An attribute is how something looks; e.g. you can make the points bigger by defining size=4. But it does not give any extra information about the data.

# Make a scatter plot of points in a spiral
p <- ggplot(df, aes(x*t, y*t, col=factor(z)))
p + geom_point(size = 4)

Use shape as an attribute

Same goes here: I am changing how something looks. The data point shape changes to 24, which defines an empty triangle. But nothing is mapped onto it; it is just an attribute.

# Make a scatter plot of points in a spiral
p <- ggplot(df, aes(x*t, y*t, color=factor(z)))
p + geom_point(shape=24, size=4)

Here, x*t, y*t, and factor(z) are mapped onto our graph.

Using shape as an aesthetic

By defining shape and color inside aes(), I can map w and z to my plot as well.

points <- 500
# Defining the Golden Angle
angle <- pi*(3-sqrt(5))
t <- (1:points) * angle
x <- sin(t)
y <- cos(t)
z <- rep(c(1,2,3,4,5,6,7,8,9,10), times=50)
w <- rep(c(1,2), times=250)
df <- data.frame(t, x, y, z, w)
p <- ggplot(df, aes(x*t, y*t, shape=factor(w), color=factor(z)))
p + geom_point(size=3)

Spirals look nice and we got some basics of ggplot. Now let’s use it to create a pattern designer with Shiny. Many patterns in Nature can be explained in mathematical terms: the shapes of sunflowers, dandelions, snowflakes, etc. I will tell the rest of the story in the next update. Now you can play with the app to create your patterns! Until next time!
{"url":"https://www.r-bloggers.com/2019/10/what-is-aesthetics-and-attributes-in-ggplots-world/","timestamp":"2024-11-02T04:52:42Z","content_type":"text/html","content_length":"97102","record_id":"<urn:uuid:fbcbb23a-03ee-472d-9847-8988d0a064d9>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00672.warc.gz"}
Tire Traction

What are the limits between the tires and road when turning? An equation is not needed to tell you that, given enough force on the tires as you are turning, your bike will start sliding out from under you. In this section, we want to qualitatively describe what happens and quantitatively provide some numbers as to when that happens.

What are the road coefficients of resistance?

When moving in a straight line, we used the Coefficient of Rolling Resistance to describe road resistance. But when our cycle begins to slide out, it is not rolling; rather it is slipping. We started out discussing resistance in general in terms of static and sliding coefficients, and then added in rolling resistance. When a cycle begins to slide out, we have to return to the first two road resistance coefficients found in Physics 101 textbooks. The Coefficient of Static Friction measures how much force is needed to start an object sliding from rest. The Coefficient of Sliding Friction measures how much force is needed to keep an object sliding once it has started. In terms of cycling, the static coefficient applies when the tire starts to slide and the sliding coefficient applies once the tire continues to slide. Here are some values on dry concrete: static = 1.0, sliding = 0.8, and rolling = 0.002.

What is moving the cycle in a cornering circle?

We said earlier that anything moving in a circle is being “pulled” towards the circle center, but we know nothing is pulling on the cycle. So where does the cornering force come from? The answer is that a force can be either a “pull” or a “push.” When cornering, the force turning the cycle is the road pushing back against the tire, which acts upon the cycle. As simple as that sounds, turning involves more than the road pushing on the tire; we have an issue of balance, as the cycle wants to continue moving in a straight line, resisting the turning motion. We will talk more about lean in a bit.
How much traction do I have when turning?

How much traction can you count on when cornering? By traction we mean the amount of force needed to cause the tire to start sliding. We know that road resistance forces are determined by multiplying the object's weight by the appropriate coefficient. Let’s get a feel for the tire slip point in a static situation. Assume a 166 lb block of rubber is sitting on a dry concrete road and we want to start it sliding. How much force would we need? The Coefficient of Static Resistance is 1.0:

F = 1.0 × 166 lbs = 166 lbs

This gives us an upper limit on how much force can be applied before the tires slide.

What happens to traction when a cyclist is leaning?

We will have a lot more to say when we get to why a cyclist must lean into a turn. But for the moment, remember this: as a cyclist leans into the turn, the portion of their weight pressing into the road, and so contributing to the friction, decreases. Think of a cyclist leaning nearly parallel to the road: virtually no weight is pulling the cycle into the road.

Tire Slip Limits for an Elite and Recreational Rider as a function of Lean Angle. Reagan Zogby

Traction Takeaways

Here are the key takeaways:
• Traction is a function of the Static Coefficient of Resistance. Once the wheel starts slipping, it gets easier for it to continue.
• The amount of traction increases with the weight of the object.
• As a cyclist leans, the force pushing the tire into the road decreases, reducing tire traction.

Next Topic: Cornering Lines
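The worked example and the lean argument above can be sketched numerically. The article does not give a formula for how the weight component falls off with lean, so the cosine factor below is our assumption, not the author's:

```python
import math

MU_STATIC = 1.0  # dry concrete, from the article

def traction_limit_lbs(weight_lbs, lean_deg=0.0, mu=MU_STATIC):
    """Force (lbs) needed to start the tire sliding.
    Simplified model: traction = mu * (weight component pressing into the
    road), taken here to shrink as cos(lean angle) -- our assumption."""
    return mu * weight_lbs * math.cos(math.radians(lean_deg))

upright = traction_limit_lbs(166)      # 166 lbs, matching the worked example
leaned = traction_limit_lbs(166, 45)   # about 117 lbs at a 45-degree lean
```

At zero lean the sketch reproduces the article's F = 1.0 × 166 lbs; as the lean angle approaches 90 degrees, the traction limit approaches zero, matching the "nearly parallel to the road" description.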
{"url":"https://physicalcycling.com/tire-traction/","timestamp":"2024-11-03T06:11:54Z","content_type":"text/html","content_length":"53287","record_id":"<urn:uuid:cf29db18-4407-4ce3-ac9d-6ef866b8966a>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00218.warc.gz"}
When does 1/2=0? (Python's integer division vs Sage's exact fractions)

Hi all! I know that Python evaluates 1/2 to zero because it is performing integer division. Well, I was expecting Sage not to do so. It does, more or less...

------ sagess.py ----------
#! /usr/bin/sage -python
# -*- coding: utf8 -*-
from sage.all import *

def MyFunction(x,y):
    print x/y
------ end of sagess.py ----------

When I launch that script from the command line, I get what Python would give:

18:08:03 ~/script >$ ./sagess.py

But when I launch it from the Sage terminal, it is less clear how it works:

18:16:04 ~/script >$ sage
| Sage Version 4.5.3, Release Date: 2010-09-04 |
| Type notebook() for the GUI, and license() for information. |
sage: import sagess
sage: sagess.MyFunction(1,2)

At the import, it returns zero, but when I re-ask the same computation, it gives the correct 1/2. I know that writing float(x)/y "fixes" the problem. But it looks weird and I want to keep exact values. What do I have to write in my scripts to make Sage understand that I always want 1/2 to be 1/2? Thanks. Have a good night! Laurent

2 Answers

sage: import sagess
sage: sagess.MyFunction(1,2)

The reason for the above behavior is that in the second call, the arguments 1 and 2 are Sage Integers, not Python ints: when run interactively, Sage preparses the input, turning all integers into the type Integer. You could reproduce the effect of your original call of MyFunction(1,2) in sagess.py by this:

sage: sagess.MyFunction(int(1), int(2))

On the other hand, to get 1/2, use Sage integers: rewrite your file as

#! /usr/bin/sage -python
# -*- coding: utf8 -*-

def MyFunction(x,y):
    from sage.rings.all import Integer
    print Integer(x)/Integer(y)

This will give the right answer with integer arguments, but it will raise an error if you pass non-integer arguments.
Ok, so the point is that when making an "interactive" import, the imported module is not preparsed?

def MyFunction(x,y):
    from sage.rings.all import Integer
    print Integer(x)/Integer(y)

I cannot do that in my real-life code. My purpose was to write a function that converts a point (x,y) into polar coordinates (radius, angle) using atan(y/x) (the aim was to show the algorithm to some …). Instead, you gave me the idea to write this one:

def MyFunction(x,y):
    print x/y

This works fine. Have a good night. Laurent
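Outside Sage, plain Python's standard library offers `fractions.Fraction` for exact rational arithmetic, which sidesteps the integer-truncation pitfall without the Sage preparser. A sketch of the polar-coordinate idea in Python 3 (the function name is ours; the original question used Python 2 syntax):

```python
import math
from fractions import Fraction

def to_polar(x, y):
    """Convert Cartesian (x, y) to polar (radius, angle).
    The ratio y/x is kept as an exact Fraction before atan is applied;
    note atan(y/x) ignores the quadrant (use math.atan2 for full coverage)."""
    ratio = Fraction(y) / Fraction(x)   # exact: Fraction(1)/Fraction(2) is 1/2, never 0
    return (math.hypot(x, y), math.atan(ratio))
```

In Python 3 the original pitfall is gone anyway, since `/` is true division and `//` is floor division, but `Fraction` keeps intermediate values exact rather than floating-point.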
{"url":"https://ask.sagemath.org/question/7917/when-does-120-pythons-integer-division-vs-sages-exact-fractions/","timestamp":"2024-11-06T20:05:31Z","content_type":"application/xhtml+xml","content_length":"61466","record_id":"<urn:uuid:cbf5d05d-e613-49b6-8d32-dd334519275e>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00668.warc.gz"}
Renewable Energy Storage Requirements Are Impossible

Storage Situation Is Worse Than Others Have Calculated

In Grid-Scale Storage of Renewable Energy: The Impossible Dream, Energy Matters (November 20, 2017), using one year of generation data for all of England and Scotland with one hour resolution, Euan Mearns calculated that to avoid outages, 390 watt hours of storage would be needed per watt of average demand. In Is 100 Percent Renewable Energy Possible? (May 25, 2018), using one year of generation data from Texas with one hour resolution, Norman Rogers calculated that storage capacity of 400 watt hours is needed per average watt of wind and solar capacity. In Geophysical constraints on the reliability of solar and wind power in the United States, Energy & Environmental Science (Issue 4, 2018), Matt Shaner et al. used 36 years of geophysical data for all of North America with one hour resolution to calculate that 400-800 watt hours of storage are needed, depending upon location and the mix of solar and wind. The following graphs were prepared using data from the California Independent System Operator (CISO) with one hour resolution from 1 January 2011 until 30 November 2020, and five minute resolution thereafter, data from the Electric Reliability Council of Texas (ERCOT) with hourly resolution from 2 July 2018, nationwide data from the Hourly Electric Grid Monitor with one hour resolution since 1 July 2018, and data from the EU for Denmark, Germany, and the EU as a whole with hourly resolution from 1 January 2015 until 30 September 2020. They show the net energy content (or deficit) that would have been in storage, assuming all supply came from renewable sources and storage charge and discharge are 100% efficient. At first, the analyses assume a system with unlimited but empty storage capacity at the beginning of the study period. Analyses are repeated with bounded storage capacity.
In early sections, quantities in storage were calculated by assuming average renewable capacity is equal to average demand. In later sections, the effect of average renewable supply being larger than average demand is analyzed. The method of calculation used here is explained below.

Daily average for solar and wind

The top left graph here shows that solar and wind outputs decrease at the time when demand increases — people come home from work, plug in the EV, turn on the air conditioner, turn on the television, and cook dinner. The bottom left graph shows the average daily trend for solar and wind output, as a fraction of average total demand during the period of analysis. The top right graph shows what fraction of total demand would be satisfied by solar and wind, if they were the only sources and their average output were magnified to equal average demand. The bottom right graph shows the average daily variation of the amount of energy that would be in storage if solar and wind were the only sources and their average output were magnified to equal average demand. The vertical axis is watt hours in storage per watt of average solar + wind production. This rather rosy average-day picture is the basis for claims that only small amounts of storage are necessary. But look carefully and you'll notice that the average daily deficit is four watt hours per watt, while the average surplus is three watt hours per watt: storage is being continuously drained.

California Renewable Electricity in Storage 2011-2024

When a time range longer than one day is considered, it is clear that the daily average is not an adequate description. Some days are better than average, and some are worse. It is necessary to consider the cumulative effect of good and bad days, especially the cumulative effect of consecutive good and bad days.
The graph below shows the amount of electrical energy that would have been in storage in California with an all-renewable energy system having capacity equal to average demand. The units of the vertical axis are watt hours in storage per watt of average demand, compared to the amount in storage on 1 January 2011. The “Unweighted” (green) line multiplies the output of all renewable generators by the same factor so that their total average output is equal to average demand. It is unlikely that biomass, biofuel, and hydro can grow much. Environmentalists want to remove dams, and they complain that fracking for geothermal causes earthquakes. The “Weighted Increase” (purple) line is computed by magnifying each renewable's output in proportion to the rate of change of that generation method's label capacity, with a different rate for each method in each year. The maximum surplus calculated using “Weighted Increase” was 645 watt hours per watt on 10 July 2022. The deepest deficit was 552 watt hours per watt on 2 February 2021. To avoid outages and to avoid dumping power when more is available than demand, and storage is already charged to full capacity, a storage system would need to have a capacity of 645 + 552 = 1,198 watt hours per watt of average demand (almost 50 days), and to have been precharged to 554 watt hours per watt of average demand on 1 January 2011 to avoid outages. The effect of precharging would be to shift the graphs upward by 554 watt hours, and the “Weighted Increase” (purple) line would nowhere have been negative. The yearly average maximum and minimum were 372.6 watt-hours per watt and -198.9 watt-hours per watt, an annual swing of 571.5 watt-hours per watt. The surplus-deficit cycle clearly has a period of a year. Although there would, on average, be daily charge-discharge cycles of about 7 watt hours per watt of average demand, during their ten year lifetimes, batteries would be nearly fully charged and discharged ten times. 
To break even on operating (not capital) costs, they would need to sell electricity at 57 times the usual rate. This assumes that batteries could hold the surplus for six months until it is needed, and wouldn't be damaged by deep discharge cycles. Renewable sources provided 36% of electrical energy. Without storage and with only renewable sources, when the trend of the amount was negative (δ(t) below is less than 0), i.e., 23.7% of the time, there would have been outages. With unlimited storage capacity, not precharged, when the amount in storage was negative and the trend of the amount was negative, i.e., 13.8% of the time, there would have been outages (see How the Graphs are Computed below). The industry definition of firm power is 99.97% availability, or about two hours and forty minutes of outage per year. The eleven-year solar cycle is clearly visible. Total label generating capacity amounts, year by year, for each generation method, were obtained from the California Department of Energy. The rate of change of total renewables' label generating capacity has not changed significantly since 2012, when solar PV began increasing rapidly, and wind and solar thermal stopped increasing.

Nationwide analysis

At https://www.eia.gov/electricity/gridmonitor/dashboard/custom/pending, the Energy Information Agency provides nationwide hourly generation data from 1 July 2018 onward. The same analysis was conducted using these data. EIA provides capacity from 2013 to 2022 in Tables 6.7a and 6.7b at https://www.eia.gov/electricity/monthly/ See How the graphs were computed below. The largest surplus was 373 watt hours per watt of demand on 7 November 2018. The deepest deficit as of 16 June 2024 was 1,821 watt hours per watt of demand on 3 April 2024. The storage capacity required to avoid outages is 373 + 1,821 = 2,194 watt hours per watt of average demand, i.e., 91 days of storage capacity would have been necessary to provide firm power.
Renewable sources provided 10.27% of nationwide electric energy, or about 3% of total energy. Without storage and with only renewable sources, when the trend of the amount was negative (δ(t)<0), i.e., 61.6% of the time, there would have been outages. With unlimited storage capacity, not precharged, when the amount in storage was negative and the trend of the amount was negative, i.e., 37.2% of the time, there would have been outages.

Cost of Storage

The May 2020 price for Tesla PowerWall 2 was $0.543 per watt hour (not kilowatt hour) of capacity, including associated electronics but not including installation. Individual installation costs range from $0.142 to $0.214 per watt hour of capacity. Industrial scale systems might get price breaks. Activists insist that an all-electric United States energy economy would have average demand of about 1,700 GWe. Assume that the California requirement of 1,198 watt hours of storage per watt of average demand is adequate forever (this is optimistic). The total cost for Tesla PowerWall 2 storage units, not including installation, with 1,198 × 1.7×10^12 = 2.04×10^15 watt hours' capacity would be 2.04×10^15 × 0.543 ≈ $1.11 quadrillion, or about 55 times total US 2018 GDP (about $20 trillion). Assuming batteries last ten years (the Tesla warranty period), the cost would be 5.5 times total US 2018 GDP per year. The cost for each of America's 128 million households would be about $864,271 per year. If the more pessimistic nationwide analysis is used, the total cost would be 7.9 times total US 2018 GDP per year, and the cost per household would be $1,242,094 per year. Prices that include installation would be 25-40% greater. This very optimistic analysis assumes 100% battery charge and discharge efficiency, and that batteries can hold a 100% charge for six months or more.
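The battery-cost arithmetic above is easy to reproduce. All inputs below are the article's own figures (PowerWall 2 price, assumed all-electric demand, California storage requirement, household count); the variable names are ours:

```python
DEMAND_W = 1.7e12            # assumed all-electric US average demand, watts
STORAGE_WH_PER_W = 1198      # California storage requirement from this analysis
PRICE_USD_PER_WH = 0.543     # Tesla PowerWall 2, May 2020, excluding installation
HOUSEHOLDS = 128e6
BATTERY_LIFE_YEARS = 10      # Tesla warranty period

capacity_wh = DEMAND_W * STORAGE_WH_PER_W            # ~2.04e15 Wh
total_cost_usd = capacity_wh * PRICE_USD_PER_WH      # ~$1.1 quadrillion
per_household_per_year = total_cost_usd / BATTERY_LIFE_YEARS / HOUSEHOLDS
# roughly $864,000 per household per year, before installation costs
```

This omits the 25-40% installation premium and the capacity increase for round-trip efficiency discussed below, both of which push the figure higher.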
Lithium ion batteries lose capacity more rapidly at full charge. It is recommended to store them at 50% capacity and moderately low temperatures. If they become completely discharged they are permanently damaged. They're closer to 90% efficient (81% round-trip), so the necessary capacity increase due to efficiency considerations alone would add about 25% more. Taking both installation and the necessary capacity increase into account results in a 75% cost increase to about 13.8 times total GDP every year. Doubling the capacity for optimal long-term storage, and to avoid complete discharge, would increase the cost to about 27.6 times total GDP every year. Elon Musk would have more money than God. California average electricity demand is 26 gigawatts. The 1,700 GWe average demand that activists insist an all-electric American energy economy would have is about 3.83 times total current average electricity demand of 444 GWe. Assuming the same ratio for California, total electricity demand in an all-electric California economy would be 99.6 GWe, so the total storage required would be 99.6×10^9 × 1,233 = 123 trillion watt hours. The cost for California would be $6.7 trillion per year, or “only” about three times total California GDP every year, or about 5.25 times total California GDP when accounting for installation and 81% round-trip efficiency. The energy density of lithium ion batteries is 230 watt hours per kilogram. A capacity of 1.9×10^15 watt hours for the entire USA would weigh 8.26 billion tonnes.
A lithium ion battery contains the following ingredients (among others):

Metal      Proportion in     Amount in 8.26 Gt    Global Reserves    Requirement
           Li-ion Battery    of Li-ion Batteries  (million tonnes)   ÷ Reserves
           (%)               (million tonnes)
Copper     17.0              1,404                   830              1.69
Aluminum    8.5                702                32,000              0.022
Nickel     15.2              1,256                    89.0           14.1
Cobalt      2.8                231                     6.9           33.5
Lithium     2.2                182                    14.0           13.0
Graphite   22.0              1,817                   330.0            5.51

Other than aluminum, the Earth does not contain enough metals to make the first generation of necessary batteries for the United States alone! Batteries last about ten years, and are not completely recyclable. Even if the first generation could be built, where would the second generation come from? Presented with these quantities, activists propose other methods, such as pumping water up mountains. In California, where would we get the water and where would we put it? The Oroville Dam at 771 feet or 235 meters is the highest dam in the country. The area of Yosemite Valley is 6 square miles, or about 15 square kilometers. Assuming it's flat (which it isn't), building a 235 meter dam across the entrance could impound 4.17 trillion liters, or 4.17 trillion kilograms, of water. The mouth of the valley is 1,200 meters above sea level, so the top of the full reservoir would be 1,435 meters above sea level. The potential energy, in joules (watt-seconds), of a mass m lifted to a height h in a gravitational field with acceleration g (9.8 meters per second squared at the surface of the Earth) is mgh. Assuming a power plant at sea level, not at the base of the dam, the water in such a reservoir would have potential energy of about 4.17×10^12×1,200×9.8/3600 = 13.6 trillion watt hours. The Betz limit for the efficiency of a turbine is 57%. California would need at least 121 trillion / (0.57×13.6 trillion) = 15.6 of these reservoirs. If the power plant were at the mouth of the valley instead of at sea level, about 97 would be required.
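The Yosemite back-of-the-envelope above uses only the mgh formula and the figures stated in the text; a short sketch (variable names are ours):

```python
G = 9.8  # gravitational acceleration at the Earth's surface, m/s^2

def reservoir_capacity_wh(mass_kg, head_m):
    """Gravitational potential energy m*g*h, converted from joules to Wh."""
    return mass_kg * G * head_m / 3600  # 3600 J per Wh

BETZ_LIMIT = 0.57                       # maximum turbine efficiency

yosemite_wh = reservoir_capacity_wh(4.17e12, 1200)   # ~13.6 trillion Wh
california_need_wh = 121e12                          # storage need, from the text
reservoirs = california_need_wh / (BETZ_LIMIT * yosemite_wh)   # ~15.6
```

Changing `head_m` shows the sensitivity of the result: with only the 235 m dam head available (a power plant at the valley mouth rather than at sea level), the per-reservoir energy drops by a factor of about five, which is why the reservoir count jumps accordingly.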
The nation as a whole would need almost 1,500. All of this assumes 100% efficiency (after accounting for the Betz limit) and optimal conditions, so in reality much more would be required. Opportunities for reservoirs the size of Yosemite Valley are limited. The Snowy 2 project in Australia is to connect two reservoirs with capacity of 254,099 million liters, separated by an elevation of 680 meters. Water is to be pumped between the reservoirs in 27 kilometers of tunnels (Yosemite Valley is 250 kilometers from San Francisco Bay). Capacity as calculated above would be 470 GWe-hours. The advertised efficiencies are 67-76% depending on output of 1,000-2,000 MWe, or 315-357 GWe-hours. Craig Brooking and Michael Bowden provided a detailed technical report. The United States would need 9,600 systems equivalent to Snowy 2. There are currently 1,450 conventional hydroelectric power plants, and 40 pumped storage plants, in the United States. The current budget for Snowy 2 is $AU 4.8 billion = $US 3.26 billion, but many expect the project to exceed $AU 20 billion = $US 13.6 billion. The project is scheduled to be completed in 2028, but many believe it will not be completed. The total cost for 9,600 such projects in the United States would be $130 trillion — if we could find places for them and water to use them. Total rainfall in a particular river's watershed is cyclical. In 2022, Lake Mead was almost empty. Texas has a more difficult problem than California, with water in the East but no mountains, and mountains 1,000 miles to the West but no water. A statistical analysis of data from the Shuttle Radar Topography Mission showed that Kansas is indeed as flat as a pancake. The next proposal is towing weights up mountains or old mine shafts. How many are required? The storage requirement is 1,722 Wh/W × 1,700,000,000,000 W × 3600 seconds/hour = 1.05×10^19 watt seconds, or joules.
Assuming 100% efficiency, a ten tonne weight, and a one kilometer lift, each device stores mgh = 10,000 kg × 9.8 m/s² × 1,000 m ≈ 9.8×10^7 joules (about 27 kWh), so dividing the 1.05×10^19 joule requirement by that gives roughly 108 billion such devices. Where would these be put? How much would they cost, per year, taking into account capital, amortization, operations, maintenance, safety, replacement, decommissioning, environmental effects, and disposal or recycling? The next proposal is hydrogen. The end-to-end electrolyzer-to-fuel-cell efficiency of hydrogen is 22%. The end-to-end electrolyzer-to-gas-turbine efficiency is about 7%. Average renewables' capacity would need to be fourteen times larger than average demand. Renewables' capacity factors are about 25%, so label capacity would need to be 55 times larger than average demand. This moves some of the materials problem noted above from batteries to generators. It is possible to store hydrogen overnight, or for a few days, but the real storage problem is yearly, as the graphs show. Hydrogen leaks through every metal, embrittling it by damaging the crystal and micrograin structures. Methane is easier to store. After the massive methane leak at Aliso Canyon, under the ranch of Kathleen Brown (Jerry “Moonbeam” Brown's sister), why is anybody seriously proposing to store hydrogen underground?

EU Analysis

Generation data for solar and wind, and total demand, were obtained at https://github.com/owid/energy-data for EU countries from 1 January 2015 until 30 September 2020. That page does not provide generation data for any other renewable sources. Not having projections for capacity growth, the relative weights for the increase of solar and wind were taken from their average generation growth rates: 24.23% per year for solar and 32.48% for wind (see How the Graphs are Computed below). Unfortunately, data for the 2022 Dunkelflaute were not available. During the interval for which data were available, renewables provided 11.4% of electricity for the EU as a whole, 46% for Denmark, and 28.3% for Germany.
If solar and wind had been the only generators, for the EU as a whole, the largest surplus in storage of unlimited capacity would have been 355 watt hours per watt of average demand, and the deepest deficit would have been 598 watt hours per watt. To provide firm power without dumping energy when batteries are fully charged, 355 + 598 = 953 watt hours of storage capacity per watt of average demand would have been needed. Without storage and without other generation sources, there would have been outages 55.5% of the time, i.e., whenever the slopes of the lines in this graph are negative. Although the patterns for the EU as a whole, and for Denmark and Germany alone, are different, the storage situation is almost identical for Germany: 969 watt hours of storage would be required, and there would have been outages 55.2% of the time without storage if the only generators were renewable sources. Denmark fared somewhat better, requiring only 783 watt hours of storage, and would have had outages 54.9% of the time without storage and if the only generators were renewable sources.

How the graphs were computed

To compute the amount of energy accumulated into (or discharged from) storage at any particular instant, using historical data, start by computing the difference δ(t) between what instantaneous power production would be if renewables were the only sources, and instantaneous demand, both in watts per watt of average demand.
The amount of energy in storage at time t since the beginning of the analysis, in watt hours per watt of average demand, is then obtained by accumulating the instantaneous power surplus (or deficit) δ(t) in each measurement interval, multiplied by the interval length (energy = power × time), i.e., computing the integral:

$S(t) = \int_0^t \delta(\tau)\,d\tau \approx \sum_{n=1}^{N} \delta(t_n)\,\Delta t_n$

where δ(t) has units of watts of surplus (or deficit) per watt of average demand, N is the number of measurement instants, $\Delta t_n$ (the duration of a measurement interval) has units of hours, and S(t) has units of watt hours per watt of average demand. S(t) is plotted in the graphs. Rectangular quadrature is justified by the fine resolution of measurements — $\Delta t_n$ was one hour for California from 1 January 2011 until 30 November 2020, and five minutes thereafter, and one hour for the other data. To use historical data to compute what δ(t) would have been if all sources were renewable sources, it is necessary to increase measured renewables' average production to match average demand. Let $\bar{R}$ be current average renewables' production, and $M\bar{W}$ be the additional average renewables' production needed to match average total demand $\bar{T}$, where $\bar{W}$ is a weighted average of renewables' production, and M is a magnification factor.
Then $\bar{R}+M\bar{W}=\bar{T}$, or $M=\frac{\bar{T}-\bar{R}}{\bar{W}}$. To compute the relationship of S(t) to average total demand, that is, how much storage capacity is needed per watt of average demand, we need

$\delta(t)=\frac{R(t)+GMW(t)}{\bar{T}(t)}-\frac{T(t)}{\bar{T}(t)}=\frac{R(t)}{\bar{T}(t)}+G\,\frac{W(t)}{\bar{W}(t)}\left(1-\frac{\bar{R}(t)}{\bar{T}(t)}\right)-\frac{T(t)}{\bar{T}(t)}$

where G is a general growth factor that allows the weighted average of renewables' production to be increased above average demand, $R(t)=\sum_{i=1}^{N}R_i(t)$, $W(t)=\sum_{i=1}^{N}g_i(t)R_i(t)$, and $\sum_{i=1}^{N}g_i(t)=1$. As remarked above, g_i(t) were computed as the rate of change of each renewable's generating method, separately in each year for California, and once using a projection for nationwide generation. Therefore, for California, the proportions by which different methods are increased are different each year, and the accumulated surplus (or deficit) of energy in storage is computed as if the generation capacities had been magnified, during that year, to be sufficient to meet average demand. The relationships of rates of increase have not significantly changed in California since about 2012, when solar photovoltaic capacity began increasing rapidly, and construction of new wind capacity stopped. If all g_i(t) were equal and constant, this method would assume that all renewable sources can be magnified by the same factor $(\bar{T}-\bar{R})/\bar{W}$ so as to increase their total average output to total average demand (the green line in the graphs). This is not going to happen. For example, environmentalists want to remove dams, not build more of them. In the initial analysis we assumed G = 1. Later, we examine the effect of larger G.
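A tiny numerical sketch of the magnification factor and the surplus formula above. The numbers are illustrative only; they are not the article's data.

```python
# A tiny numerical sketch of the magnification factor M = (T - R)/W and
# the surplus delta(t) defined above.  The numbers are illustrative, not
# taken from the article's data.
T_bar, R_bar, W_bar = 1.0, 0.334, 0.2   # averages, per watt of average demand
G = 1.0                                  # growth factor in the initial analysis
M = (T_bar - R_bar) / W_bar              # magnification needed to match demand

def delta(R_t, W_t, T_t):
    """Instantaneous surplus (deficit) per watt of average demand."""
    return (R_t + G * M * W_t) / T_bar - T_t / T_bar

# With production and demand at their average values, delta is zero by
# construction (renewables scaled up to exactly match average demand):
print(abs(delta(R_bar, W_bar, T_bar)) < 1e-12)   # True
```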
Because neither average demand nor average renewables' production are constant, the "instantaneous" average demand and production were computed using least-squares fits to C^2-continuous cubic splines, with constraints on the slopes (m_i) at the ends of the interval given by least-squares fits to straight lines, $\bar{R}_i(t)\simeq m_i t+b_i$. The slope constraints are necessary because the interval of analysis does not necessarily begin or end at the end of a year. Without them, the "instantaneous" average near the beginning or end of the interval would be anomalously small or large compared to a similar instant in the middle of the interval. (A "cubic spline" is a piecewise curve composed of cubic polynomials. The "C^2-continuous" term means that its value, slope, and curvature are continuous everywhere.) The "Wt Average" line here is for the weighted average $\bar{W}(t)$. An example of this method to compute the "instantaneous" average is illustrated for total demand. The end-point yearly averages might be anomalously large or small compared to other years when data are available for only a fraction of those years. The "Yearly Average" mark is placed at January 1 of each year.

These analyses are simplistically optimistic. There are several factors not considered that would increase the storage requirement by amounts not quantified here:

• Lithium-ion batteries cannot hold full charge for six months. They lose capacity more rapidly at full charge. It is recommended that they be charged to 50% capacity and maintained at moderate temperature.
• If batteries are fully discharged, they are permanently damaged.
• Batteries can be damaged by being discharged too rapidly.
• Batteries can be charged only at 20-25% of their maximum discharge rate.
• The round-trip charge-discharge efficiency is about 80%, and depends on charge rate, discharge rate, and temperature history.
How to read the graphs

The quantity δ(t) in How the Graphs are Computed is the slope of the lines in the graphs of storage content. Where δ(t) > 0, and therefore S(t) is increasing, more electricity was produced than demanded, and energy would have been flowing into storage. Where δ(t) < 0, and therefore S(t) is decreasing, less electricity was produced than demanded, and energy would have been withdrawn from storage. Where S(t) > 0, renewable sources plus stored electricity produced sufficient power to satisfy demand. Where S(t) < 0, renewable sources plus stored electricity did not produce sufficient power to satisfy demand, and outages would occur where S(t) is decreasing (δ(t) < 0), for example, between November and March. This shows the necessity for non-renewable sources — coal, gas, and nuclear — or significant storage, in renewable electricity systems. Observe that in mid 2020, the total energy that would be in storage as a result of all renewables being increased equally, and renewables having produced more than demand, was about 400 watt hours per average watt of capacity. When the amount in storage is negative, for example between November 2020 and June 2021, any time that demand exceeds supply, i.e., δ(t) < 0, there would be outages. If an all-renewable generating system had been in place in California on 1 January 2011, with a storage system having capacity less than about 1,180 watt hours per watt of average demand, and had not been precharged to 706 watt hours per watt of average demand, there would have been prolonged outages.

What causes the variation in stored energy?

There is clearly a difference between generation during days and nights. Annual variation in stored energy is caused by renewables' generation not being synchronous with demand. Demand begins to increase each year just as renewables' output is beginning to decrease. The capacity factor for demand is total demand divided by total generating capacity.
This assumes that dispatchable sources, such as gas, are adjusted so that their output matches demand. Other capacity factors are computed by dividing output by nameplate (label) capacity. The following graphs show the capacity factors in California for demand and renewables, so as to remove the effect of inter-annual demand and generation variation, and changing capacity. The yearly periodic asynchronous relationship of renewables' output compared to demand is evident in the second graph above. Phases of yearly variation were computed by fitting each phenomenon to β₁ sin(ωt) + β₂ cos(ωt), where ω = 2π/8765.81 (8765.81 being the number of hours in a year), and t is time in hours since the beginning of the period of analysis (1 January 2011). The phase of each phenomenon with respect to the beginning of the period of analysis is then tan⁻¹(β₂/β₁), and the difference in phases is 46 days, i.e., demand begins to increase about 46 days after output from renewable sources begins to decline. With the limited amount of data available (twelve years), by fitting to β₁ sin(ωt) + β₂ cos(ωt) + β₃ sin(λt) + β₄ cos(λt), where ω is as above and λ is to be found, a longer-term variation with a period of 8.24 years was found. The phase differences of this variation, tan⁻¹(β₄/β₃), compared to demand, range from -28 days (wind) to +172 days (solar) to +515 days (hydro). The average phase difference between demand and renewables is 45 days. Each time that more data are used, the solved-for period (2π/λ) increases. Long-term variation frequencies are probably related to the Sun's eleven-year activity cycle. There might be even longer-term variations that are related to solar activity cycles of about 70 and 1,500 years, but these cannot be measured by using only twelve years of generation data.
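The phase computation described above is an ordinary two-parameter linear least-squares fit. A sketch (illustrative code, not the author's software):

```python
# Sketch of the phase computation described above: least-squares fit of
# b1*sin(w t) + b2*cos(w t), then phase = atan2(b2, b1).  Names are
# illustrative; this is not the author's original software.
import math

def fit_phase(t, y, omega):
    """Fit y(t) ~ b1*sin(omega*t) + b2*cos(omega*t) via the normal
    equations and return the phase atan2(b2, b1), in radians."""
    s = [math.sin(omega * ti) for ti in t]
    c = [math.cos(omega * ti) for ti in t]
    ss = sum(si * si for si in s); cc = sum(ci * ci for ci in c)
    sc = sum(si * ci for si, ci in zip(s, c))
    sy = sum(si * yi for si, yi in zip(s, y))
    cy = sum(ci * yi for ci, yi in zip(c, y))
    det = ss * cc - sc * sc
    b1 = (cc * sy - sc * cy) / det
    b2 = (ss * cy - sc * sy) / det
    return math.atan2(b2, b1)

# A yearly sinusoid shifted by 46 days should be recovered exactly.
omega = 2 * math.pi / 8765.81            # radians per hour
shift = 46 * 24                          # 46 days, in hours
t = list(range(0, 8766, 3))              # samples over about one year
y = [math.sin(omega * (ti + shift)) for ti in t]
print(fit_phase(t, y, omega) / omega / 24)   # ~ 46 days
```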
Effect of increasing generating capacity above average demand

California with 100 hours' storage and 1.25 × capacity

California data were analyzed again with average renewables' generating capacity increased to G = 1.25 times average demand, with the same relative output magnifications g_i(t), and a 100 Wh/W storage capacity. The "flat line" bounding the maximum storage amount means that excess generation would be dumped. Solar thermal and wind output can be adjusted somewhat, but solar PV output cannot be adjusted if the panels have fixed mountings. Articulated mountings would be very expensive. 776,000 gigawatt hours of output — 44% of total demand — would have been dumped. There would have been outages 19% of the time, i.e., when the slope of the line in this graph is negative.

California with 12 hours' storage and 3 × capacity

If average renewables' generating capacity were to have been increased to G = 3 times average demand, and 12 hours' storage were provided, as is claimed to be sufficient by many activists, there would have been outages 3.4% of the time, i.e., when the slope of the line in this graph is negative. 6,300,000 gigawatt hours of output — 355% of total demand — would have been dumped. The cost for only twelve hours' storage, for an all-electric 1.7 TWe American energy economy, would be $11.1 trillion, or about $1.1 trillion per year, or 5.5% of total GDP. The cost for each of America's 128 million households would be about $8,654 per year for batteries alone. Renewables provided 33.4% of California electricity between 1 January 2011 and 1 January 2023. Electricity satisfies about one third of total California energy demand. To provide all California energy from renewable electricity sources whose average generating capacity is three times average demand would require a capacity increase of 3 × 3 / 0.334 = 2695% above the capacity needed to satisfy all current California electricity demand.
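A rough cross-check of the twelve-hours'-storage arithmetic, using only the figures stated above (1.7 TWe demand, $11.1 trillion total). Note the per-kWh battery price printed here is implied by those two figures; the article does not state it.

```python
# Rough cross-check of the twelve-hours'-storage arithmetic above, using
# only figures stated in the article (1.7 TWe demand, $11.1 trillion total).
# The per-kWh battery price is implied by those figures, not stated.
power_w = 1.7e12                       # all-electric US energy economy, W
storage_kwh = power_w * 12 / 1000      # twelve hours' storage, in kWh
implied_price = 11.1e12 / storage_kwh  # $/kWh implied by the $11.1T total
print(f"{storage_kwh / 1e9:.1f} billion kWh of storage")   # 20.4
print(f"implied battery cost ~${implied_price:.0f}/kWh")   # ~544
```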
Increasing hydro at all, or increasing biogas, biomass, and geothermal by 2695%, is unlikely. Activists demanded, and California and Oregon acquiesced, that three dams on the Klamath river be removed to restore salmon habitat. Unusually heavy rainfall wiped out that salmon habitat because the dams' flood-control function no longer existed.

Texas with twelve to 600 hours' storage

Data for Texas from 2 July 2018 through 30 June 2023 were analyzed with average renewables' capacity equal to average demand, storage from 12 to 600 hours, and unlimited storage. With 1,200 watt hours of storage per watt of average demand, pre-charged to 323 watt hours per watt of average demand, outages would have been completely avoided.

│ As of 30 April 2023 │
│ Storage  │Outage│Dumped│
│12 Wh/W   │46.8% │1.69% │
│100 Wh/W  │44.4% │1.69% │
│200 Wh/W  │41.9% │1.58% │
│300 Wh/W  │37.1% │1.35% │
│400 Wh/W  │31.6% │1.12% │
│500 Wh/W  │26.9% │0.88% │
│600 Wh/W  │18.1% │0.65% │
│Unlimited │4.4%  │0%    │

Energy return on invested energy

Dumping output reduces the energy return on energy invested (EROI). An EROI of at least seven is required for economic viability. With storage, and even without dumping, solar PV and wind are not viable without subsidies. Subsidies do not eliminate costs — they just hide them in your tax bill where politicians hope you won't notice them — so they do not actually make solar and wind viable. California appears to have stopped building solar thermal generators, and the EIA does not predict any increase in US solar thermal capacity. (Solar CSP means Concentrated Solar Power, or solar thermal.)

Daniel Weißbach, G. Ruprecht, A. Huke, K. Czerski, S. Gottlieb, and A. Hussein, Energy intensities, EROIs (energy returned on invested), and energy payback times of electricity generating plants, Energy 52, 1 (April 2013) pp 210-221. Preprint at https://festkoerper-kernphysik.de/Weissbach_EROI_preprint.pdf

D. Weißbach, F. Herrmann, G. Ruprecht, A. Huke, K. Czerski, S. Gottlieb, and A. Hussein, Energy Intensities, EROI (energy return on invested), for energy sources, EPJ Web of Conferences 189, 00016 (2018). https://www.epj-conferences.org/articles/epjconf/pdf/2018/24/epjconf_eps-sif2018_00016.pdf

If 355% of total renewables' output were dumped, the EROI from solar PV would be reduced to 0.56, i.e., less energy would be produced than was invested in the devices. Where would that extra required energy come from? The EROI from wind would be reduced to 1.13.

Problem with increasing capacity

The problem with renewable energy in general, and increasing capacity in particular, is materials. Professor Simon Michaux at Geologian Tutkimuskeskus — the Geological Research Center, or Geological Survey of Finland — has quantified the problem. For copper alone, if production were to continue at the 2019 rate, 189 years would be required to build the "technology units" demanded by the IEA. The amount required is almost six times the total amount that humans have so far extracted from the Earth, and five times more than is known to exist in forms that can be extracted. If all known reserves were completely used, 19% of the units could be built. Environmentalists want more solar panels and wind turbines and batteries and electric vehicles and ... but block new mines. See the main article and The Great Green Energy Transition is Impossible for details.

The next supervolcano eruption

This discussion assumes that the period analyzed includes the deepest deficit that will ever occur — which is, of course, false. When Mount Tambora on the island of Sumbawa in Indonesia erupts again and produces another "year without a summer" such as in 1816 — and it or another volcano as large definitely will, the only question is when — there will be no times for several years when δ(t) > 0. The trend of storage content will always and everywhere be downward. The deepest deficit will be far deeper than any shown here. No physically feasible or economically viable amount of storage could suffice.
Renewable generation capacity and storage capacity could not be increased sufficiently rapidly. There would be energy available for only a small fraction of demand. Politicians' homes, and (maybe) hospitals, would have first priority. Civilization would collapse. Typos? Mistakes? Quibble with the analysis? Want the software and data I used? van dot snyder at sbcglobal dot net.
{"url":"http://vandyke.mynetgear.com/Worse.html","timestamp":"2024-11-11T11:05:00Z","content_type":"text/html","content_length":"47859","record_id":"<urn:uuid:7119ec95-cdd5-4174-8095-7dc9410e626f>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00286.warc.gz"}
AC Current And Voltage By Rotating Vectors

Direct Current (DC) does not change its direction over time, and sources that supply it are called DC sources. A current that periodically reverses its direction in a circuit is known as Alternating Current, or AC. Thus, AC is a current that alternates: the electrons move first in one direction along a conductor and then in the opposite direction. Household mains electricity is of this type.

Resistors, Inductors, and Capacitors

Many electrical circuits feature resistors, inductors, and capacitors connected across an AC source, or a combination of any two or all three of these components connected across an AC source. When a resistor is used, the current flowing through it is in phase with the voltage source. In the case of a capacitor or an inductor, however, the current either lags or leads the voltage source by some amount. This is where the concept of phasors comes into play to connect the current and voltage. The sinusoidally fluctuating values of current and voltage are represented by the vertical components of the phasors I and V. The amplitudes, or peak values, I_m and V_m of these oscillating quantities are represented by the magnitudes of the phasors I and V. The projections of the voltage and current phasors on the vertical axis are the voltage and current values at that moment, respectively.

What is an inductor?

An inductor is a passive component that is mainly used in power electronic circuits for storing magnetic energy. Many power electronic circuits store excess energy in magnetic form in an inductor. An inductor is also called a reactor, choke, or coil. It absorbs surplus energy and releases it back to the circuit when needed, and it helps smooth the current flowing through it.
Furthermore, the voltage across an inductor is proportional to the rate of change of the current through it.

Relationship between voltage and current in the AC circuit

When an AC voltage v = V_m sin ωt is applied to an inductor, the relationship between voltage and current is as follows:

v = L × di/dt
di/dt = v/L
di/dt = V_m sin ωt / L .... (1)

Integrating equation (1):

i = ∫ (V_m sin ωt / L) dt
i = −V_m cos ωt / (ωL)
i = (V_m/ωL) sin(ωt − 90°)
i = i_m sin(ωt − 90°)

where V_m = maximum voltage, ω = angular frequency, and i_m = maximum value of the current.

The expression for inductive reactance:

v = V_m sin ωt
i = i_m sin(ωt − 90°) = −i_m cos ωt
i_m = V_m/(ωL)
i_m = V_m/X_L

where X_L = ωL = 2πfL is the inductive reactance. The SI unit of inductive reactance is the ohm. X_L is proportional to the angular frequency: it increases when the frequency increases and decreases when the frequency decreases. Moreover, for DC the frequency is 0, so X_L is also 0.

AC Voltage

An AC power source's output is sinusoidal and fluctuates with time according to the equation v(t) = V₀ sin(ωt), where v(t) is the instantaneous voltage, V₀ is the source's maximum output voltage, also known as the voltage amplitude (written V_max in some books), and ω is the AC voltage's angular frequency.

Rules for Drawing a Phasor Diagram

Rule 1: The phasor's length is proportional to the amplitude of the wave being portrayed.
Rule 2: In circuits with L, C, and R in series, the phasor indicating current is usually shown horizontally and referred to as the reference phasor. This is because all components in a series circuit share the same current.
Rule 3: In circuits with L, C, and R connected in parallel, the phasor representing the supply voltage is always drawn in the reference direction. This is because all components in a parallel circuit share the same supply voltage.
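The 90° lag derived above can be checked numerically. This is a minimal sketch with assumed component values (10 V peak, 50 Hz, 0.1 H), not taken from the article:

```python
# Numerical check (not from the article) that the current through a pure
# inductor lags the applied voltage by 90 degrees.  Component values are
# assumed for illustration: 10 V peak, 50 Hz, 0.1 H.
import math

Vm, f, L = 10.0, 50.0, 0.1
w = 2 * math.pi * f          # angular frequency, rad/s
XL = w * L                   # inductive reactance X_L = wL, in ohms
Im = Vm / XL                 # peak current i_m = V_m / X_L

def v(t): return Vm * math.sin(w * t)
def i(t): return Im * math.sin(w * t - math.pi / 2)

print(round(XL, 3), round(Im, 4))    # 31.416 0.3183
# At the instant the voltage peaks (wt = pi/2) the current passes through
# zero, a quarter cycle (90 degrees) behind the voltage:
t_peak = (math.pi / 2) / w
print(round(v(t_peak), 6), round(abs(i(t_peak)), 6))   # 10.0 0.0
```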
Rule 4: All phasors are regarded to rotate in an anticlockwise orientation.
Rule 5: For each phasor in a diagram, the same sort of value (RMS, peak, etc.) is utilised – not a mix of values.

At point O, a point object is placed on the principal axis of the concave surface. The ray OS, incident at the point S of the concave surface, travels along SR after refraction. As n₂ > n₁, the refracted ray bends towards the normal CSN. Another ray OP, incident normally on the concave surface, is undeviated. The two refracted rays do not intersect in reality but appear to meet at point I in Medium 1. Thus, I is the virtual image of the object placed at O. In the figure, i = ∠OSC, r = ∠NSR = ∠ISC, PO = object distance = −u, PI = image distance = −v, and radius of curvature = PC = −R. Let ∠SCP, ∠SIP, and ∠SOP denote the angles subtended at C, I, and O. Snell's law is applied at the point of refraction S: n₁ sin i = n₂ sin r. For paraxial rays, or a small aperture, i and r will be small. Hence, sin i ≈ i and sin r ≈ r. Making this approximation, if Medium 1 is air (n₁ = 1) and Medium 2 has a refractive index n₂ = n, we can write the above equation as n/v − 1/u = (n − 1)/R. When an object is placed in the denser medium, the image of the object is formed by a concave surface dividing the two media. This is shown in the figure below.

Refraction at a convex surface: When light refracts at a convex surface, it creates a real or virtual image. We will discuss the two cases separately. When the image is virtual: Consider a convex surface AB separating two media of refractive indices n₁ and n₂.
Let n₂ > n₁, as shown in the figure. P is the pole, and C is the centre of curvature of the convex surface. A point object is placed at the point O on the principal axis of the curved surface. The ray OS, incident at the point S on the convex surface, travels along ST after refraction. As n₂ > n₁, the refracted ray bends towards the normal CSN. Another ray OP, incident normally on the convex surface, is undeviated. The two refracted rays do not intersect in reality but appear to meet at point I. Thus, I is the virtual image of the object placed at O. Here ∠OSN = i, ∠CST = ∠NSI = r, PO = −u, PI = −v, and PC = +R. Let ∠SCP, ∠SIP, and ∠SOP denote the angles subtended at C, I, and O. Applying Snell's law at the point of refraction, n₁ sin i = n₂ sin r. For paraxial rays, i and r will be small. Hence, sin i ≈ i and sin r ≈ r, and this substitution can be made in n₁ sin i = n₂ sin r.

When the image is real: Consider a convex surface AB separating two media of refractive indices n₁ and n₂ (n₂ > n₁), as shown in the figure. Let P be the pole and C be the centre of curvature of the convex surface. Let a point object be placed at point O on the principal axis of the convex surface. The ray OS, incident at S, travels along SI after refraction at S. As n₂ > n₁, the refracted ray bends towards the normal CSN. Another ray OP, incident normally, is undeviated. The two refracted rays intersect at I in Medium 2. Thus, I is the real image of the object placed at O.

Lateral magnification: Consider an extended object AO placed in Medium 1 (refractive index n₁) facing a convex spherical surface MPN of Medium 2 (refractive index n₂). Rays originating from A and O directed towards the centre C travel undeviated into Medium 2. Let the image formed be BI, as shown in the figure. Its position may be located by using the refraction equation above.

With this, we come to the end of the topic of Refraction at Spherical Surfaces and by Lenses. Refraction is the bending of light (it also happens with sound, water, and other waves) when it passes from one medium into another. We hope the topic is clear to everyone.
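The derivations above lead to the standard single-surface relation n₂/v − n₁/u = (n₂ − n₁)/R, of which the air case n/v − 1/u = (n − 1)/R is a special instance. A short worked example with the same sign convention (the numbers are assumed for illustration, not from the article):

```python
# A worked example (numbers assumed for illustration, not from the
# article) of the single-surface refraction relation with the sign
# convention used above: n2/v - n1/u = (n2 - n1)/R.

def image_distance(n1, n2, u, R):
    """Solve n2/v - n1/u = (n2 - n1)/R for the image distance v."""
    return n2 / ((n2 - n1) / R + n1 / u)

# Point object in air, 30 cm in front of a convex glass surface
# (n1 = 1.0, n2 = 1.5, u = -30 cm, R = +10 cm):
v = image_distance(1.0, 1.5, -30.0, 10.0)
print(round(v, 6))   # 90.0 -> a real image formed 90 cm inside the glass
```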
{"url":"https://unacademy.com/content/cbse-class-11/study-material/physics/ac-current-and-voltage-by-rotating-vectors/","timestamp":"2024-11-13T01:49:35Z","content_type":"text/html","content_length":"668331","record_id":"<urn:uuid:62899a97-1384-46c5-b821-406a1d023b31>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00503.warc.gz"}
Accelerating Vector Search: NVIDIA cuVS IVF-PQ Part 2, Performance Tuning | NVIDIA Technical Blog

In the first part of the series, we presented an overview of the IVF-PQ algorithm and explained how it builds on top of the IVF-Flat algorithm, using the Product Quantization (PQ) technique to compress the index and support larger datasets. In this part two of the IVF-PQ post, we cover the practical aspects of tuning IVF-PQ performance. It's worth noting again that IVF-PQ uses a lossy compression, which affects the quality of the results (measured in terms of recall). We explore the algorithm hyper-parameters, which control memory footprint and search performance, to tune for a wide range of desired recall levels. We also cover the refinement reranking technique, which can be an important step for achieving high recall with PQ compression, especially when working with billion-scale datasets. All of the index build and search parameters described in this blog post are described in cuVS's API reference documentation (C, C++, Python, and Rust). Also helpful are the cuVS IVF-PQ tutorial notebook and projects to get started.

Tuning parameters for index building

Since IVF-PQ belongs to the same family of IVF methods as IVF-Flat, some parameters are shared, such as the coarse-level indexing and search hyper-parameters. Please see our prior blog post on IVF-Flat for an introduction and guide to tuning these parameters. While these parameters have the same names, there are a few details specific to IVF-PQ that are worth considering when setting them. Aside from these shared parameters, IVF-PQ adds a few additional parameters that control the compression.

n_lists

The n_lists parameter has the same meaning as in IVF-Flat: the number of partitions (inverted lists) into which to cluster the input dataset. During search, the algorithm scans through some of the lists. Both the number of lists probed and their sizes affect the performance.
Figure 1 presents the QPS/recall curves for the DEEP-100M dataset for a wide range of partitions (n_lists). Each curve represents the queries per second (QPS) and the recall values with varying search parameters. In this case, we're varying the number of probed lists in the range between 10 and 5000. Typically, one decides on the desired recall level in advance and looks for the hyper-parameter combination that yields the highest QPS. Often, there's no single parameter combination that gives the best QPS for all recall levels. QPS/recall curves are a good way to visualize the trade-off and select an appropriate configuration for the desired recall range.

Figure 1. Effects of n_lists on QPS/recall trade-off.

Depending on the desired recall, the models with 10K and 20K clusters might be most desirable, as they have the best throughput for a wide range of recall values. The n_lists = 5K curve has significantly lower throughput than the other settings across the whole recall range, and the 200K and 500K curves also have very low throughput compared to the others. The above experiment suggests that n_lists in the range of 10K to 50K is likely to yield good performance across recall levels, though a good setting for this parameter will often still depend on the dataset.

pq_dim, pq_bits

Compression is mainly controlled with the pq_dim parameter, and often a good technique for tuning this parameter is to start with one fourth the number of features in your dataset, increasing it in powers of two from there. An interesting fact about the IVF-PQ search is that its core component, the fine search step, doesn't depend on the original dimensionality of the data. On the one hand, it scans through the quantized, encoded database vectors, which depends only on pq_bits and pq_dim. On the other hand, the search vectors are transformed to a pq_dim·pq_len representation using a relatively cheap matrix multiplication (GEMM) operation early in the search pipeline.
This allows us to experiment freely on the pq_dim parameter range. It's also possible to set pq_dim larger than dim, although this likely won't improve the recall.

Figure 2. Effects of pq_dim on QPS/recall trade-off. Note the logarithmic scale of the vertical axis.

Figure 2 demonstrates the effects of setting a range of pq_dim values for the IVF-PQ model on a 10 million vector subsample of the DEEP-1B dataset. This dataset has only 96 dimensions, and when pq_dim is set larger than the number of dimensions, a random transformation is performed to increase the feature space. Note, this does not make much sense in practice, except for benchmarking the QPS of the model, but in this experiment we've opted to keep using the DEEP dataset for consistency with the rest of this blog post. Figure 2 illustrates significant drops in QPS, which happen for several reasons and cannot be explained by the increasing amount of work alone. We'll have a closer look at the curves as pq_dim increases. First, changing pq_dim from 128 to 256 or 384 effectively doubles or triples the amount of compute work and the size of required shared memory per CUDA block. This likely leads to lower GPU occupancy and, as a result, the QPS drops by a factor of three or more. Second, changing pq_dim to 512 or more likely triggers the look-up table (LUT) placement in the global memory. Otherwise, there is not enough shared memory per CUDA block to launch the kernel. This leads to another 4x slowdown. The pq_bits parameter is the number of bits used in each individual PQ code. It controls the codebook size (2^pq_bits), or the number of possible values each code can have. IVF-PQ supports codebook sizes from 16 to 256, which means pq_bits can be in the range [4, 8]. pq_bits also has an effect on the compression. For example, an index with pq_bits = 4 is going to be half the size of one with pq_bits = 8. A much stronger effect of pq_bits, though, is on the size of the LUT, which is proportional to 2^pq_bits.
This has a drastic effect on recall. For more details about the lut_size formula, see the Product quantization section in the first part of this post. A few things to consider about pq_bits:

• It is required that (pq_dim * pq_bits) be evenly divisible by 8. In general, keeping pq_dim in powers of two improves data alignment, and thus the search performance.
• Using pq_dim * pq_bits >= 128 and having (pq_dim * pq_bits) evenly divisible by 32 maximizes the GPU memory bandwidth utilization.
• In general, a good starting point is setting pq_bits = 8 and decreasing from there.
• The recall loss due to smaller pq_bits can be compensated for by performing a refinement step (more on this in the Refinement section).
• For extremely high-dimensional data (more than a few hundred dimensions) and large pq_dim, a lower pq_bits setting can yield a drastic search speedup, because the LUT can be made small enough to fit in shared memory.
• A small pq_bits value can reduce shared memory bank conflicts, which can improve the QPS.
• Alternatively, as we will see later in this blog, setting the search parameter lut_dtype to reduced-precision floating point (fp8) may also be enough to keep the LUT in shared memory.

codebook_kind

The codebook_kind parameter determines how the codebooks for the second-level quantizer are constructed. The second-level quantizers are trained either for each subspace or for each cluster:

1. subspace creates pq_dim second-level quantizers, one for each slice of the data along the feature space (columns).
2. cluster creates n_lists second-level quantizers, one for each first-level cluster.

In both settings, the centroids are found using k-means clustering, interpreting the data as having pq_len dimensions.
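The size relationships above can be made concrete with a small back-of-the-envelope helper. This is illustrative code only, not part of the cuVS API:

```python
# Back-of-the-envelope sizes implied by the parameters above.  These are
# illustrative helper functions, not part of the cuVS API.

def pq_index_bytes_per_vector(pq_dim, pq_bits):
    """Encoded size of one database vector: pq_dim codes of pq_bits each."""
    assert (pq_dim * pq_bits) % 8 == 0, "pq_dim * pq_bits must divide by 8"
    return pq_dim * pq_bits // 8

def lut_bytes(pq_dim, pq_bits, dtype_bytes=4):
    """Size of one cluster's look-up table: one entry per possible code
    value (2**pq_bits) in each of the pq_dim subspaces, times entry size."""
    return pq_dim * (2 ** pq_bits) * dtype_bytes

# 96-dimensional float32 vectors (384 bytes raw), pq_dim=96, pq_bits=8:
print(pq_index_bytes_per_vector(96, 8))   # 96  -> 4x compression
print(lut_bytes(96, 8))                   # 98304 bytes with float32 entries
# Dropping to pq_bits=4 halves the index and shrinks the LUT 16x:
print(pq_index_bytes_per_vector(96, 4))   # 48
print(lut_bytes(96, 4))                   # 6144
```

This shows why a smaller pq_bits (or a narrower lut_dtype) can move the LUT back into shared memory even when pq_dim is large.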
There is no definitive way to determine in advance which of the two options will yield better performance for a particular use case, but the following guidelines may help:

• A per-cluster codebook tends to take longer to train, as n_lists is usually much higher than pq_dim (more codebooks to train).
• Search with a per-cluster codebook usually has better utilization of the shared memory of the GPU than with a per-subspace codebook. This may result in a faster search when the LUT is large and occupies a significant part of the GPU shared memory.
• In practice, the per-subspace codebook tends to result in slightly higher recall.

kmeans_n_iters, kmeans_trainset_fraction

These parameters are passed to the k-means algorithm in both IVF-Flat and IVF-PQ to cluster the data during index creation. In addition, kmeans_n_iters is passed down to the same k-means algorithm when constructing the codebooks. In practice, we have found the kmeans_n_iters parameter rarely needs to be adjusted. To learn more about the available index build parameters for IVF-PQ, you can refer to the API reference documentation, which includes the C, C++, Python, and Rust APIs.

Tuning parameters for search

n_probes

Refer to our previous blog post on IVF-Flat for information on the n_probes parameter. The n_probes / n_lists ratio determines which fraction of the dataset is searched; therefore, it has a strong effect on the search accuracy and throughput. n_probes is the most important search parameter, but IVF-PQ provides a few more knobs to adjust the internal workings of the algorithm.

internal_distance_dtype

The internal_distance_dtype parameter controls the representation of the distance or similarity during the search. By default, this is the 32-bit float (numpy.float32), but changing it to 16-bit float (numpy.float16) can save memory bandwidth where appropriate. For example, float16 can be useful when the dataset datatype is low precision anyway (8-bit int, for example), though it can help with 32-bit float datasets too.
lut_dtype is the datatype used to store the look-up table (LUT). The PQ algorithm stores data in the product quantizer encoded format, which needs to be decoded during the second-phase (in-cluster) search. Thus, the algorithm constructs a separate LUT for each cluster. Constructing the look-up tables can be costly, and the tables can be rather large. By default, the individual elements in the table are stored as 32-bit floats, but you can change this to 16-bit or 8-bit to reduce the table size. Ideally, the LUT should be able to fit in the shared memory of the GPU; however, this is not the case for datasets with very large dimensionality. The logic of deciding whether this table should remain in the shared or global memory of the GPU is somewhat complicated, but you can see the outcome when gradually changing pq_dim and observing a sudden drop in QPS after a certain point (Figure 3). When the LUT is placed in the shared memory, the search speed tends to be 2-5x faster compared to a similar search with the LUT in the global memory. An additional benefit of changing the lut_dtype instead of, say, build parameters like pq_dim, is that you can reduce the LUT by a factor of 2x or 4x without having to rebuild the index. A few additional things to note: • It is not possible (and does not make sense) to set the lut_dtype to a more precise type than the internal_distance_dtype, as the former is still converted to the latter for computing the distances. • Smaller is not always faster. In some cases, setting lut_dtype = internal_distance_dtype yields better performance than setting a smaller lut_dtype because it saves a few cycles on data conversion in a performance-critical loop. Figure 3. Effects of search parameters on QPS/recall trade-off on NVIDIA H100 GPU with DEEP-100M dataset. Figure 3 demonstrates the trade-offs for different combinations of the internal search data types. The XX/YY labels denote the bit-precision of the used internal_distance_dtype/lut_dtype pair.
Depending on the GPU, the selected dataset, and the batch size, you may see different results. With the DEEP-100M dataset, pq_dim = dim = 96, pq_bits = 8, and a batch size of 10 queries at a time, the 8-bit lut_dtype appears to come at a significant cost to recall. Selecting a 16-bit type for one or both parameters does not seem to reduce the recall in this case, but does improve the QPS. Improving recall with refinement Tweaking the search and indexing parameters is not always enough: the recall may still be lower than required due to the PQ compression. You could try using IVF-Flat as an alternative, but it becomes problematic for billion-scale datasets because IVF-Flat offers no index compression. Refinement offers another promising alternative. Refinement is a separate operation that can be performed after the ANN search, and can be applied after IVF-PQ to improve the recall lost from the compression. The refinement operation recomputes the exact distances for the already selected candidates and selects a subset of them. This operation is often referred to as reranking by other vector search libraries, and is usually performed with a number of candidates larger than the original number of k candidates requested by the user. Here is a small pseudocode snippet to illustrate how the refinement follows IVF-PQ search: candidate_dists, candidates = ivf_pq.search(search_params, index, queries, k * 2) neighbor_dists, neighbors = refine(dataset, queries, candidates, k) For an example of the refinement API, refer to our example projects, the cuVS API reference docs, or the tutorial IVF-PQ notebook. Although not required, we usually set the number of neighbor candidates initially queried to an integer multiple of the desired number of neighbors. This multiple is referred to as the refine_ratio throughout cuVS's ANN APIs.
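To make the reranking step concrete, here is a minimal NumPy sketch of what a refinement pass does. The `refine` name mirrors the pseudocode above, but this brute-force implementation is ours for illustration; cuVS provides an optimized version.

```python
import numpy as np

def refine(dataset, queries, candidates, k):
    """Re-rank ANN candidates by exact squared L2 distance.

    Illustrative only: recomputes exact distances for the already
    selected candidates and keeps the k closest, as described above.
    """
    n_queries, n_cand = candidates.shape
    dists = np.empty((n_queries, n_cand))
    for q in range(n_queries):
        diff = dataset[candidates[q]] - queries[q]    # (n_cand, dim)
        dists[q] = np.einsum("ij,ij->i", diff, diff)  # squared L2 norms
    order = np.argsort(dists, axis=1)[:, :k]          # best k per query
    neighbors = np.take_along_axis(candidates, order, axis=1)
    neighbor_dists = np.take_along_axis(dists, order, axis=1)
    return neighbor_dists, neighbors
```

Note that, as the text says, this step needs the original uncompressed dataset, which is why its location (host or device) matters for the memory footprint.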
The Search performance section in the first part of this post shows how searching with x2 refinement improves the recall by a large margin (from 0.85 to 0.95). Figure 4 shows the internal_distance_dtype/lut_dtype experiment from Figure 3 compensated with the refinement. We use ratios 2 and 4 to see the change in recall and QPS. The XX/YY-xZ labels denote the bit-precision of the used internal_distance_dtype/lut_dtype pair and the level of the refinement. Figure 4. Effects of search parameters on QPS/recall trade-off, compensated with the refinement operation. We applied the refinement operation after the search with ratios x1 (no refinement), x2, and x4, and dropped some of the search configurations to declutter the plot. Note that refinement requires access to the source dataset, so this should be considered when calculating the memory footprint. We keep the index on the GPU and the dataset on the host, so the refinement operation is performed on the CPU, although cuVS supports both modes. Figure 4 shows that refinement compensates for the errors arising from the PQ compression. Ratio x2 achieves 0.99 recall, while increasing to x4 boosts the maximum recall even further at the cost of QPS in the lower-recall range. Judging by the left tails of the curves, the refinement in this case comes with ~25% cost to the QPS. The Accelerating Vector Search with Inverted-File Indexes series covers two cuVS algorithms: IVF-Flat and IVF-PQ. The IVF-Flat algorithm represents an intuitive transition from the exact KNN to a flexible ANN. IVF-PQ extends IVF-Flat with PQ compression, which further speeds up the search, making it possible to process billion-scale datasets with limited GPU memory. We presented many techniques for fine-tuning parameters for index building and search. Knowing when to trade accuracy for performance, data practitioners can gather the best results efficiently.
The NVIDIA cuVS library provides a range of vector search algorithms to accelerate the wide variety of use cases, from the exact search to low-accuracy-high-QPS ANN methods, covering million-scale to billion-scale problem sizes and possibly beyond. To practice tuning IVF-PQ parameters for your dataset, check out our IVF-PQ notebook on GitHub. To further explore the provided APIs, see the cuVS documentation.
{"url":"https://developer.nvidia.com/blog/accelerating-vector-search-nvidia-cuvs-ivf-pq-performance-tuning-part-2/","timestamp":"2024-11-09T04:20:04Z","content_type":"text/html","content_length":"235608","record_id":"<urn:uuid:94aa6199-0ed1-4a0c-be3f-9901fac4f3e2>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00126.warc.gz"}
Chaos, Fractals and Dynamic Systems. Instructor: Prof. S. Banerjee, Department of Electrical Engineering, IIT Kharagpur. The course covers lessons in Representations of Dynamical Systems, Vector Fields of Nonlinear Systems, Limit Cycles, The Lorenz Equation, The Rossler Equation and Forced Pendulum, The Chua's Circuit, Discrete Time Dynamical Systems, The Logistic Map and Period Doubling, Flip and Tangent Bifurcations, Intermittency Transcritical and Pitchfork, Two Dimensional Maps, Mandelbrot Sets and Julia Sets, Stable and Unstable Manifolds, The Monodromy Matrix and the Saltation Matrix. (from nptel.ac.in)
{"url":"http://www.infocobuild.com/education/audio-video-courses/electronics/chaos-fractals-dynamic-systems-iit-kharagpur.html","timestamp":"2024-11-08T15:46:30Z","content_type":"text/html","content_length":"14017","record_id":"<urn:uuid:9111b85b-08b7-42b3-b8d4-79374971d667>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00291.warc.gz"}
units - Converts units from one measure to another units [-] [file] The units command converts quantities expressed in one measurement to their equivalents in another. The units command is an interactive command. It prompts you for the unit you want to convert from and the unit you want to convert to. The units command does multiplicative scale changes only. That is, units can convert from one value to another only when the conversion is done with a multiplication factor. For example, units cannot convert between degrees Fahrenheit and degrees Celsius because the value of 32 must be added or subtracted in the conversion. You can specify a quantity as a multiplicative combination of units, optionally preceded by a numeric multiplier. Indicate powers by entering suffixed positive integers and indicate division with a / (slash). The units command recognizes lb as a unit of mass, but considers pound to be the British pound sterling. Compound names are run together (for example, lightyear). Prefix British units differing from their American counterparts with br (for example, brgallon). The /usr/share/lib/units file contains a complete list of the units that the units command uses. Most familiar units, abbreviations, and metric prefixes are recognized by the units command, together with the following: Ratio of circumference to diameter. Speed of light. Charge on an electron. Acceleration of gravity. Same as g. Avogadro's number. Pressure head per unit height of water. Astronomical unit. The - argument causes units to display a list of all known units and their conversion values. The file argument specifies an alternative units file to be used instead of the default file units. To start the units command, enter: units Now you can try the following examples. 
To display conversion factors, enter:

    you have: in
    you want: cm
            * 2.540000e+00
            / 3.937008e-01

The output from the units command tells you to multiply the number of inches by 2.540000e+00 to get centimeters, and to multiply the number of centimeters by 3.937008e-01 to get inches. These numbers are in standard exponential notation, so 3.937008e-01 means 3.937008 x 10^-1, which is the same as 0.3937008. The second number is always the reciprocal of the first; for example, 2.54 equals 1/0.3937008.

To convert a measurement to different units, enter:

    you have: 5 years
    you want: microsec
            * 1.577846e+14
            / 6.337753e-15

The output shows that 5 years equals 1.577846 x 10^14 microseconds, and that 1 microsecond equals 6.337753 x 10^-15 years.

To give fractions in measurements, enter:

    you have: 1|3 mi
    you want: km
            * 5.364480e-01
            / 1.864114e+00

The | (vertical bar) indicates division, so 1|3 means one-third. This shows that one-third mile is the same as 0.536448 kilometers.

To include exponents in measurements, enter:

    you have: 1.2-5 gal
    you want: floz
            * 1.536000e-03
            / 6.510417e+02

The expression 1.2-5 gal is the equivalent of 1.2 x 10^-5 gallons. Do not type an e before the exponent. This example shows that 1.2 x 10^-5 (0.000012) gallons equal 1.536 x 10^-3 (0.001536) fluid ounces.

To specify complex units, enter:

    you have: gram centimeter/second2
    you want: kg-m/sec2
            * 1.000000e-05
            / 1.000000e+05

The units gram centimeter/second2 mean "grams x centimeters/second2." Similarly, kg-m/sec2 means "kilograms x meters/sec2," which is often read as "kilogram-meters per seconds squared". If the units you specify after you have and you want are incompatible, units reports a conformability error:

    you have: ft
    you want: lb
    conformability
            3.048000e-01 m
            4.535924e-01 kg

The message conformability means the units you specified cannot be converted. Feet measure length, and pounds measure mass, so converting from one to the other does not make sense. Therefore, the units command displays the equivalent of each value in standard units.
In other words, this example shows that 1 foot equals 0.3048 meters and that 1 pound equals 0.4535924 kilograms. The units command shows the equivalents in meters and kilograms because the command considers these units to be standard measures of length and mass. Entering <Ctrl-d> causes you to exit from the units command. The file /usr/share/lib/units contains units and their conversion values.
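The multiplicative model the manual describes is easy to sketch. The following toy converter is our own illustration (with a small hand-picked factor table rather than the real /usr/share/lib/units file); it reproduces the multiplier/reciprocal pair that units prints, and like units it can only handle conversions that reduce to a single scale factor.

```python
# Minimal multiplicative unit converter in the spirit of units(1):
# every unit is reduced to a scale factor over a base unit, so only
# multiplicative conversions work (no Fahrenheit <-> Celsius offsets).
FACTORS = {  # unit -> factor in meters (illustrative subset)
    "m": 1.0, "cm": 0.01, "in": 0.0254, "ft": 0.3048,
    "km": 1000.0, "mi": 1609.344,
}

def convert(value, have, want):
    """Return (converted value, reciprocal factor), as units(1) prints them."""
    mult = FACTORS[have] / FACTORS[want]
    return value * mult, 1.0 / mult
```

For value 1, the first element is exactly the `*` multiplier units prints (e.g. in -> cm gives 2.54), and the second is the `/` reciprocal (0.3937008...).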
{"url":"https://nixdoc.net/man-pages/Tru64/man1/units.1.html","timestamp":"2024-11-09T01:21:11Z","content_type":"text/html","content_length":"21347","record_id":"<urn:uuid:420766a1-bca7-41e4-9271-630def596584>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00714.warc.gz"}
The AstroStat Slog As Alanna pointed out, astronomers and statisticians mean different things when they say “model”. To complicate matters, we have also started to use another term called “data model”. First, there is the physical model, which could mean either our understanding of what processes operate on a source (the physics part, usually involving PDEs), or the mathematical function that describes the emission as a function of observables like location, time, or energy (the astronomy part, usually the shape of the spectrum, or the time evolution in a light curve, etc.) The data model on the other hand describes the organization of the observation. It is this which tells us that there is a fundamental difference between an effective area and a response matrix, and conversely, that the point spread function and the line response function are the same beast. This kind of thing, which I suppose is a computer science oriented view of the contents of a file, is crucial for implementing and running something like the Virtual Observatory. 2 Comments 1. hlee: Due to the binary nature of data model and computer science, interpreting data model into statistical one for the inference purpose comes smoothly, in contrast to astronomers’ model. The challenges lie in developing computer scientific theories, most of which can be associated with already existing theories from mathematical statistics. However, this is not always true when it comes to Information Theory. [Response: Hmm. I don't see how the data model can have relevance to statistical inference. It is not binary. It is essentially imposing an object-oriented approach which may make the writing of generalized Bayesian code easier, but other than that, it doesn't have any connection to statistics. It might help to make your programs run better (maybe even faster) and be written in a more scalable fashion.] 10-05-2007, 5:28 pm 2.
hlee: I recommend the book by Cover and Thomas, Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing), and a paper by Shannon, A mathematical theory of communication. The book in particular is very good for general purposes (coding, data compression, signal/image processing, and filter design; I heard many CS/EE departments use this book in their required coursework) and it contains quite a few statistical theorems, which are the bases of developing a data model, or an object-oriented approach to writing code. [Added] Elements of Information Theory has its own website: http://www.elementsofinformationtheory.com/ Some years ago, I was able to find many course websites that listed it as a required textbook and contained problems, solutions, and relevant research topics, including one of the authors’ course website at Stanford. A personal wish is that statistics departments offer Information Theory related courses with a cross opening to astronomy students. 10-09-2007, 6:01 pm
{"url":"https://hea-www.harvard.edu/AstroStat/slog/groundtruth.info/AstroStat/slog/2007/model-vs-model/index.html","timestamp":"2024-11-14T04:46:59Z","content_type":"application/xhtml+xml","content_length":"23321","record_id":"<urn:uuid:2d28303e-1fda-43b4-ad0c-fdbe234de619>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00267.warc.gz"}
Cost Volume Profit Analysis (CVP) - Datarails Cost-volume-profit analysis or CVP analysis, also known as break-even analysis, is a financial planning tool leaders use to create effective short-term business strategies. It conveys to business decision-makers the effects of changes in selling price, costs, and volume on profits (in the short term). Financial planning and analysis (FP&A) leaders commonly apply CVP to break-even analysis. Simply put, break-even analysis calculates how many sales it takes to pay for the cost of doing business to reach a break-even point (neither making nor losing money). Let’s take a sub-tastic example of cost-volume-profit analysis in action: Imagine you are opening a restaurant selling sub sandwiches. Through your research, you discover you can sell each sandwich for $5. But…you then need to know the variable cost. Finding the Variable Costs The variable cost is the cost of making the sandwich (the bread, mustard, and pickles). This cost is known as variable because it “varies” with the number of sandwiches you make. In our case, the cost of making each sandwich (each sandwich is considered a “unit”) is $3. Now, what is the contribution margin? Contribution margin is the amount by which revenue exceeds the variable costs of producing that revenue. The formula for the calculation of the contribution margin is: CM = Sales – Variable Costs Subtract the variable cost from the sale price ($5-the $3 in our sub example). This gives us the contribution margin. Therefore, in the case of our sandwich business, the contribution margin is $2 per unit/sandwich. Fixed Costs Now, we need to know fixed costs. These costs remain constant (in total) over some relevant output range. Fixed costs include things like rent and insurance. Whether the sandwich shop sells 50 subs or 50,000 subs, these costs stay the same. In our sandwich business example, let’s say our fixed costs are $20,000. 
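The sandwich-shop numbers established so far can be collected into a short script: a minimal sketch of the contribution-margin and break-even arithmetic this article walks through (the function name and layout are ours for illustration).

```python
def break_even(price, variable_cost, fixed_costs):
    """Break-even point from the CVP formulas used in this article."""
    cm_per_unit = price - variable_cost  # CM = Sales - Variable Costs
    cm_ratio = cm_per_unit / price       # fraction of each sales dollar
    units = fixed_costs / cm_per_unit    # Break-Even Units
    dollars = fixed_costs / cm_ratio     # Break-Even Dollars
    return units, dollars

# Sandwich shop: $5 price, $3 variable cost, $20,000 fixed costs
# -> 10,000 sandwiches, or $50,000 in sales, to break even
units, dollars = break_even(5.0, 3.0, 20_000.0)
```

Changing any one input (say, raising the price to $6) immediately shows how sensitive the break-even point is, which is the whole point of CVP as a short-term planning tool.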
The Meat of the Matter: Finding the Break-Even Point in Units (or Sandwiches) To find out the number of units that need to be sold to break even, the fixed cost is divided by the contribution margin per unit. Break-Even Units = Fixed Costs/Contribution Margin Per Unit So, $20,000 in fixed costs divided by our $2 contribution margin ($20,000/$2) means we need to sell 10,000 sandwiches if we do not want to lose money. Setting a Dollar Target For Breaking Even However, we will likely need to enter a sales dollar figure (rather than the number of units sold) on the register. This involves dividing the fixed costs by the contribution margin ratio. Break-Even Dollars = Fixed Costs/Contribution Margin Ratio For our sub business, the contribution margin ratio is ⅖, meaning 40 cents of each sales dollar contributes to covering fixed costs. With $20,000 in fixed costs divided by the contribution margin ratio (0.4), we arrive at $50,000 in sales. Therefore, we can break even if we ring up $50,000 in sales. Difference Between CVP Analysis and Break-Even Analysis Cost Volume Profit (CVP) analysis and break-even analysis are sometimes used interchangeably, but in reality they differ because break-even analysis is a subset of CVP. CVP is a comprehensive analysis that examines the relationship between sales volume, costs, and profit to determine break-even points and profit targets. It considers various factors, such as: • Sales price • Costs • Sales mix Break-even analysis only identifies the sales volume required to break even. It is a subset of CVP analysis focused on finding a situation where total revenue equals total costs, resulting in zero profit or loss. It helps determine the minimum sales volume needed to cover costs. The real-world business dangers of CVP analysis The dangers of not doing a CVP analysis are instantly clear. In a real-world example, the founder of Domino's Pizza, Tom Monaghan, recounts in his book "Pizza Tiger" an early problem involving poorly calculated CVP.
The company was providing small pizzas that cost almost as much to make and just as much to deliver as larger pizzas. Because they were small, the company could not charge enough to cover its costs. At one point, the company’s founder was so busy producing small pizzas he did not have time to determine that the company was losing money on them. On a separate note, according to industry experts, real-time CVP analysis was crucial during COVID-19, particularly in industries such as hotels, just to keep the lights on. Plotting the CVP Graph The cost-volume-profit chart, often abbreviated, is a graphical representation of the cost-volume-profit analysis. In other words, it’s a graph showing the relationship between the cost of units produced and the volume produced using fixed costs, total costs, and total sales. It’s a clear and visual way to tell your company’s story and the effects when changing selling prices, costs, and volume. On the X-axis is “the level of activity” (for instance, the number of units). On the Y-axis, we place sales and total costs. The fixed cost remains the same regardless. The point where the total costs line crosses the total sales line represents the break-even point. This is the point of production where sales revenue will cover production costs. The above graph shows the break-even point is between 2000 and 3000 units sold. For FP&A leaders, this cost accounting method can show executives the margin of safety or the risk the company is exposed to if sales volumes decline. For instance, the CVP can show an executive that in an economic downturn, the company is at risk of losing money on sales of this product because it has a higher level of risk due to its lower margin of safety. In conjunction with other types of financial analysis, leaders use this to set short-term goals that will be used to achieve operating and profitability targets. CVP Analysis Limitations Like all analytical methodologies, CVP analysis has inherent limitations. 
This includes challenges for CVP analysts when identifying what should be considered a fixed cost and what should be classified as a variable cost. Some fixed costs do not remain fixed indefinitely. Costs that once seemed fixed, such as contractual agreements, taxes, and rents, can change over time. Further, assumptions made surrounding the treatment of semi-variable costs could be inaccurate. Therefore, having real-time data fed in with a solution such as Datarails is paramount. With that all said, for most, the best way to do a CVP analysis is to use Excel. To build a chart, first input the "juicy data" (price, variable expenses, contribution margin). In the "insert" tab, choose a line chart. Select the data and edit the graph appropriately to correct the labels. This visual line chart tells your story clearly, outlining revenue, fixed costs, total expenses, and the break-even point. Using Datarails, a Budgeting and Forecasting Solution CVP is a tried and tested method for businesses. Datarails is a budgeting and forecasting solution that integrates such spreadsheets with real-time data. It integrates fragmented workbooks and data sources into one centralized location. This lets you work in the comfort of Microsoft Excel with the support of a much more sophisticated but intuitive data management system. Plugging into your financial reports ensures this valuable data is updated in real-time. Everything you have built or can build in Excel is available through Datarails, which provides security and efficiency (and no more copying, formatting, and pasting). We do the mapping and updates (normally hours of manual work), and the system automatically updates. For instance, simple CVP analysis is automatically updated in a PDF presentation in real-time through Datarails. Did you learn a lot about cost-volume-profit analysis in this article? Here are three more to read next: Customer Acquisition Cost (CAC): Definitions, Formula, & More Excel vs.
Google Sheets: Which One is Better? 6 Budget Monitoring Strategies to Integrate in Your Marketing Plan
{"url":"https://www.datarails.com/cost-volume-profit-analysis/","timestamp":"2024-11-06T05:20:13Z","content_type":"text/html","content_length":"192871","record_id":"<urn:uuid:1a976a43-0de6-46fc-b415-7893f13d2663>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00245.warc.gz"}
Counting Problems for Geodesics on Arithmetic Hyperbolic Surfaces It is a longstanding problem to determine the precise relationship between the geodesic length spectrum of a hyperbolic manifold and its commensurability class. A well-known result of Reid, for instance, shows that the geodesic length spectrum of an arithmetic hyperbolic surface determines the surface's commensurability class. It is known, however, that non-commensurable arithmetic hyperbolic surfaces may share arbitrarily large portions of their length spectra. In this paper we investigate this phenomenon and prove a number of quantitative results about the maximum cardinality of a family of pairwise non-commensurable arithmetic hyperbolic surfaces whose length spectra all contain a fixed (finite) set of non-negative real numbers. Repository Citation: Linowitz, Benjamin. 2018. "Counting Problems for Geodesics on Arithmetic Hyperbolic Surfaces." Proceedings of the American Mathematical Society 146(3): 1347-1361. Publisher: American Mathematical Society. Publication Date: 2018. Publication Title: Proceedings of the American Mathematical Society. Keywords: Quaternion algebras, Commensurability, Theorem.
{"url":"https://digitalcommons.oberlin.edu/faculty_schol/3903/","timestamp":"2024-11-11T00:24:19Z","content_type":"text/html","content_length":"33675","record_id":"<urn:uuid:1da15cd5-f03f-4f6d-af7a-39f7045cf8fd>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00621.warc.gz"}
Satellite Motion - Complete Toolkit 1. State and explain the meaning of Kepler’s three laws of planetary motion. 2. Explain the reason that a satellite can be thought of as a projectile that falls around the Earth instead of into it; and compare and contrast a circular orbit with an elliptical orbit in terms of the force, acceleration and velocity vectors. 3. Use equations to calculate the orbital speed, orbital acceleration and orbital period for a satellite that orbits a central body of known mass a known distance away. 4. Discuss the meaning and the cause of weightlessness and explain why an orbiting astronaut would experience weightless sensations. 5. Use an energy analysis to explain both the changes in speed and the constant speed of a satellite in an elliptical and a circular orbit. Readings from The Physics Classroom Tutorial Interactive Simulations 1. Orbital Motion The Orbital Motion Interactive simulates the elliptical motion of a satellite around a central body. The eccentricity of the orbit can be altered. Velocity and force vectors are shown as the satellite orbits. The Physics Classroom has prepared a classroom-ready activity for use by teachers with their classes. 2. Open Source Physics: Kepler System Model This robust model will let your students visualize all three of Kepler's Laws. The First and Second Law are especially well-modeled for the beginner, mostly because of the array of tools for viewing. You can show the planet's orbital plane, view the elliptical plane, trace orbits of both Earth and one other chosen planet, show a line-of-sight vector. To explore the Third Law, you can set your own parameters for a user-defined planet. All the computational data was taken from the National Space Science Data Center's fact sheets. The model provides two windows -- one simulates the Sun with Earth and one other planet orbiting; the second shows the view of Sun and Planet against the background stars as seen from Earth. 3.
Open Source Physics: Newton’s Mountain Model As Newton pondered, what would happen if you launched a projectile from a VERY tall mountain on Earth? This HTML sim lets you explore the idea of Newton's Mountain. The model is based on the diagram taken from Newton's "A Treatise on the System of the World", found in the Principia. Newton concluded that a projectile launched horizontally with sufficient speed would orbit Earth, rather than crashing back to the surface. You can set initial speed and launch angle, allow the projectile to pass through Earth, or designate Earth as a point mass. 4. The Physics Interactives: Elevator Ride The Elevator Ride Interactive is a simulation depicting the forces acting upon an elevator rider while ascending and descending. The emphasis on the Interactive is on communicating the sensations of weightlessness and weightiness experienced by a rider. The Physics Classroom has prepared a classroom-ready activity for use by teachers with their students. Background Information on Space Flight 1. NASA Jet Propulsion Lab: The Basics of Space Flight For teachers wanting a deep dive into the fundamentals of space exploration, this is your resource. Packaged by NASA’s Jet Propulsion Lab, this guide for non-physicists covers reference frames, the physics of gravity and planetary orbits, technologies, instrumentation, navigation, spacecraft design, and flight operations from takeoff to encounter. There's lots of video out there dealing with gravity, with lots of grievously wrong information flying around! This well-produced 45-minute video from The History Channel goes at the top of our list for these reasons: 1) It's not juvenile, but comprehensible for high school, 2) Noted scientists present the information in an engaging way, 3) Videography is beautiful, and 4) It knits together concepts of Newtonian gravitation, spacetime curvature, g forces, microgravity, and relationship of gravity and energy. 
Teachers: Don't let the first 5 minutes fool you - the video quickly escalates to a level appropriate for physics students. How can you build a cheap model to help students visualize gravitation on the cosmic scale? Easy.....spandex, tent poles, binder clips, marbles, ball bearings, and wooden balls. You may have already seen high school teacher Dan Burns' viral video "Gravity Visualized", where he shows how to use the freestanding model. Did you know you can also use it to model the Roche Limit and the Lab Materials List - Scroll to middle of page! Physicist Derek Muller again packs a lot of punch with this short video that debunks the myth that astronauts in the ISS are in a zero-gravity environment. As he explains, the space station orbits Earth at 400 km above the surface, so it is subject to the Law of Universal Gravitation (in other words, it is pulled toward Earth by gravity). The sensation of weightlessness is actually free fall! So why doesn’t the ISS crash into Earth? It’s traveling at an orbital velocity of 28,000 km/hr, so as it falls, the Earth curves away from it. A green planet orbits an orange star in two animations – one depicting uniform circular motion (circular orbit) and the other showing an elliptical orbit. The velocity vector is shown in blue and the acceleration vector in red.
Takeaways: Acceleration in the elliptical orbit is far greater as the planet approaches its perihelion. Second, if there are no other planets or stars nearby, the acceleration of the planet is directed exactly toward the star whether the orbital motion is uniform or not. This animation closely resembles the Sun/Jupiter system. You can choose a mass ratio of 1000:1, 100:1, 10:1, 2:1, and 1:1. The screenshot shows 10:1. Kepler’s First Law tells us that the orbit of every planet is an ellipse with the Sun at one of the two foci. But how large does the mass of the Sun have to be to achieve this idealized planetary motion? And what happens to the kinetic energy of the system as a function of time? Kepler’s Second Law states that planets sweep out equal areas in their orbits in equal times. This animation displays total area swept out per unit of time. Ask students to choose any time increment (2 years, 3 years, 5 years) and watch the planet sweep out equal areas even though its orbital path is not uniform (it’s elliptical). This exploration with pdf student worksheet shows 10 identical planets orbiting a star. You can change initial position or initial velocity of the planets and watch what happens to the orbits. Takeaways: If the initial velocity is too low, the inner planets will collide with the star. As you increase the initial velocity, the orbits become more uniform (circular). But there’s a tipping point where a too-high initial velocity can send planets shooting off into space, escaping the star’s gravitation. The motion of the planets as seen from the reference frame of the Sun is pretty simple. But from the perspective of each individual planet (the geocentric reference frame), it gets complicated. This animation explores both reference frames for a system involving a star, an inner planet, and an outer planet. 
Takeaway: Students will gain an appreciation of how difficult it must’ve been for early astronomers to figure out that the planets revolved around the Sun. It also prepares learners to understand that it’s impossible to describe astronomical motion unless you first decide upon a frame of reference. Labs and Investigations 1. The Physics Classroom, The Laboratory, Satellite Motion Simulation Students explore circular and elliptical motion of satellites and compare and contrast the direction and magnitude of the force and velocity vector for these two types of orbits. 2. The Physics Classroom, The Laboratory, The Law of Harmonies Analysis Students are provided data for the orbital radius (in a.u. units) and orbital period (in Earth-year units) for the planets and analyze the data to provide evidence for Kepler's third law of planetary motion. 3. The Physics Classroom, The Laboratory, Jupiter's Moons Analysis Students analyze orbital period and orbital radius data for several moons of Jupiter in order to determine if Kepler's third law of planetary motion also applies to satellites in general. 4. The Physics Classroom, The Laboratory, Mass of Saturn Analysis Students use Kepler's third law and orbital data for the moons of Saturn in order to determine the mass of Saturn. 5. The Physics Classroom, The Laboratory, The Mini Drop Lab Students use a force scale and a 1-kg mass to explore the strength of the upward force on an object as it free falls and as it is brought to a stop after free fall. Link: http://www.physicsclassroom.com/lab#circ Video Analysis Exercise 1. Tracker Video Analysis: Angry Birds in Space What type of force is exerted on Angry Birds in Space? This resource set explores the physics behind the video game designed by Rovio, makers of Angry Birds. The package includes zip files of Angry Birds in Space video clips and Tracker files that allow you to perform precise video analysis of the motion.
Also included is the blog entry that goes with this topic, a video, and a link to download and run the free Tracker analysis tool. Even if you can't access Java in computer lab, author Rhett Allain's pdf document gives you all the background you need to set it up yourself in the classroom. Elsewhere on the Web 1. Classroom Lesson Module: Give Me A Boost – Gravity Assist Minds On Physics Internet Modules: The Minds On Physics Internet Modules are a collection of interactive questioning modules that target a student’s conceptual understanding. Each question is accompanied by detailed help that addresses the various components of the question. 1. Circular Motion and Gravitation, Ass’t CG8 - Satellite Motion 2. Circular Motion and Gravitation, Ass’t CG9 - Weightlessness 3. Circular Motion and Gravitation, Ass’t CG10 - Kepler's Laws of Planetary Motion Concept Building Exercises: 1. The Curriculum Corner, Circular Motion and Gravitation, Satellite Motion 2. The Curriculum Corner, Circular Motion and Gravitation, Weightlessness 3. The Curriculum Corner, Circular Motion and Gravitation, Kepler's Laws of Planetary Motion Problem-Solving Exercises: 1. The Calculator Pad, ChapterGoesHere, Problems #19 - #27 Link: http://www.physicsclassroom.com/calcpad/circgrav/problems Science Reasoning Activities: 1.
Kepler's Law of Harmonies Link: http://www.physicsclassroom.com/reasoning/circularmotion Common Misconception: 1. Satellites as Projectiles Students at first find it difficult to believe that artificial satellites are projectiles. They figure that there must be a force other than gravity acting upon them. While such satellites may on occasion use thrust to fine-tune their orbits, they are essentially projectiles that fall toward the Earth relative to their inertial path without ever getting any closer to the Earth. When provided the logic behind this concept, they tend to buy into the essential nature of satellite motion. 2. Causes of Weightlessness Ask a student why astronauts feel weightless and you will undoubtedly hear the response that there is no gravity acting upon them. Years of poor word choice have likely contributed to this misconception. References to "microgravity" and "zero-gravity" may not be the best choice of terms to describe the gravitational conditions of orbiting astronauts, as students assign different meanings to these terms than was intended by those who use them. The result is often some very strong conceptions about what is meant by weightlessness. It is important to explain to students that weightlessness does not result when there is no gravity force but rather results when the only force present is the gravity force. A. Next Generation Science Standards (NGSS) Performance Expectations • Forces and Motion HS-PS2-4: Use mathematical representations of Newton’s Law of Gravitation to describe and predict the gravitational forces between objects. • Forces and Motion HS-PS2-2: Use mathematical representations to support the claim that the total momentum of a system of objects is conserved when there is no net force on the system.
• Energy HS-PS3-1: Create a computational model to calculate the change in the energy of one component in a system when the change in energy of the other component(s) and energy flows in and out of the system are known. Disciplinary Core Ideas • Forces and Motion-Types of Interactions HS-PS2.B.i Newton’s Law of Universal Gravitation and Coulomb’s Law provide the mathematical models to describe and predict the effects of gravitational and electrostatic forces between objects. • Forces and Motion-Types of Interactions HS-PS2.B.ii Forces at a distance are explained by fields (gravitational, electric, and magnetic) permeating space that can transfer energy through space. Magnets or electric currents cause magnetic fields; electric charges or changing magnetic fields cause electric fields. • Forces and Motion HS-PS2.A.ii Momentum is defined for a particular frame of reference; it is the mass times the velocity of the object. • Forces and Motion HS-PS2.A.iii If a system interacts with objects outside itself, the total momentum of the system can change; however, any such change is balanced by the changes in the momentum of objects outside the system. • Conservation of Energy HS-PS3.B.i Conservation of energy means that the total change of energy in any system is always equal to the total energy transferred into or out of the system. • Conservation of Energy HS-PS3.B.ii Mathematical expressions, which quantify how the stored energy in a system depends on its configuration and how kinetic energy depends on mass and speed, allow the concept of conservation of energy to be used to predict and describe system behavior. • Relationship Between Energy and Forces HS-PS3.C.i When two objects interacting through a field change relative position, the energy stored in the field is changed. 
NGSS Engineering and Technology Standards (ETS) • High School-ETS1.A.i When evaluating solutions it is important to take into account a range of constraints including cost, safety, reliability, and aesthetics and to consider social, cultural, and environmental impacts. • High School-ETS1.A.ii Both physical models and computers can be used in various ways to aid in the engineering design process. Computers are useful for a variety of purposes, such as running simulations to test different ways of solving a problem or to see which one is most efficient or economical; and in making a persuasive presentation to a client about how a given design will meet his or her needs. Crosscutting Concepts Scale, Proportion, and Quantity • High School: The significance of a phenomenon is dependent on the scale, proportion, and quantity at which it occurs. Systems and System Models • High School: When investigating or describing a system, the boundaries and initial conditions of the system need to be defined and their inputs and outputs analyzed and described using models. • High School: Models can be used to predict the behavior of a system, but these predictions have limited precision and reliability due to the assumptions and approximations inherent in models. Nature of Science: Order & Consistency in Natural Systems • High School: Scientific knowledge is based on the assumption that natural laws operate today as they did in the past and they will continue to do so in the future. • High School: Science assumes the universe is a vast single system in which basic laws are consistent. • High School: Science assumes that objects and events in natural systems occur in consistent patterns that are understandable through measurement and observation. 
Science and Engineering Practices Practice #1: Analyzing and Interpreting Data • High School: Analyze data using tools, technologies, and/or models (e.g., computational, mathematical) in order to make valid and reliable scientific claims. Practice #2: Developing and Using Models • High School: Develop and use a model based on evidence to illustrate the relationships between systems or between components of a system. (Strong alignment) • High School: Use a model to provide mechanistic accounts of phenomena. (Strong alignment) Practice #3: Planning and Carrying Out Investigations • High School: Plan and conduct an investigation individually and collaboratively to produce data to serve as the basis for evidence. Practice #4: Constructing Explanations and Designing Solutions • High School: Construct an explanation based on valid and reliable evidence obtained from a variety of sources (including students’ own investigations, models, theories, simulations) and the assumption that theories and laws that describe the natural world operate today as they did in the past and will continue to do so in the future. Practice #5: Using Mathematics and Computational Thinking • High School: Create or revise a computational model or simulation of a phenomenon, process, or system. (Strong alignment) • High School: Use mathematical representations of phenomena or design solutions to describe and/or support claims and/or explanations. (Strong alignment) B. Common Core Standards for Mathematics – Grades 9-12 Standards for Mathematical Practice: • Reason abstractly and quantitatively • Model with mathematics • Make sense of problems and persevere in solving them • High School N-Q.1: Use units as a way to understand problems and to guide the solution of multi-step problems; choose and interpret units consistently in formulas; choose and interpret the scale and the origin in graphs and data displays. • High School A-SSE.1. Interpret parts of an expression, such as terms, factors, and coefficients.
• High School A-SSE.2 Use the structure of an expression to identify ways to rewrite it. • High School A-CED.2 Create equations in two or more variables to represent relationships between quantities. • High School A-CED.4 Rearrange formulas to highlight a quantity of interest, using the same reasoning as in solving equations. • High School F-IF.4 For a function that models a relationship between two quantities, interpret key features of graphs and tables in terms of the quantities, and sketch graphs showing key features given a verbal description of the relationship. • High School F-IF.6 Calculate and interpret the average rate of change of a function (presented symbolically or as a table) over a specified interval. Estimate the rate of change from a graph. Linear, Quadratic, and Exponential Models • High School F-LE.1.b Recognize situations in which one quantity changes at a constant rate per unit interval relative to another. • High School F-LE.5 Interpret the parameters in a linear or exponential function in terms of a context. • High School G-C.5 Derive using similarity the fact that the length of the arc intercepted by an angle is proportional to the radius, and define the radian measure of the angle as the constant of proportionality; derive the formula for the area of a sector. • High School G-GPE.3 Derive the equations of ellipses and hyperbolas given the foci, using the fact that the sum or difference of distances from the foci is constant. C. Common Core Standards for English/Language Arts (ELA) – Grades 9-12 Key Ideas and Details • High School RST.11-12.3 Follow precisely a complex multistep procedure when taking measurements or performing technical tasks; analyze the specific results based on explanations in the text. • High School RST.11-12.2 Determine the central ideas or conclusions of a text; summarize complex concepts, processes, or information presented in a text by paraphrasing them in simpler but still accurate terms.
Integration of Knowledge and Ideas • High School RST.11-12.7 Integrate and evaluate multiple sources of information presented in diverse formats and media (e.g., quantitative data, video, multimedia) in order to address a question or solve a problem. • High School RST.11-12.9 Synthesize information from a range of sources (e.g. texts, experiments, simulations) into a coherent understanding of a process, phenomenon, or concept. Range of Reading and Level of Text Complexity • High School RST.11-12.10 By the end of grade 12, read and comprehend science/technical texts in the grades 11-CCR text complexity band independently and proficiently.
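As a closing illustration of the data analysis in Lab #2 above (The Law of Harmonies Analysis), the T²/a³ pattern students are asked to find can be checked in a few lines of code. This sketch is not part of the toolkit; the orbital values are standard textbook approximations in astronomical units and Earth years:

```python
# Rough check of Kepler's third law (T^2 proportional to a^3) using
# approximate orbital data: a in astronomical units, T in Earth years.
# Values are textbook approximations, not taken from the toolkit itself.
planets = {
    "Mercury": (0.387, 0.241),
    "Venus": (0.723, 0.615),
    "Earth": (1.000, 1.000),
    "Mars": (1.524, 1.881),
    "Jupiter": (5.203, 11.86),
}

for name, (a, T) in planets.items():
    ratio = T**2 / a**3  # should come out close to 1 with AU and years
    print(f"{name}: T^2/a^3 = {ratio:.3f}")
```

With these units the ratio is essentially 1 for every planet, which is exactly the "harmony" students are meant to discover from the lab's data table.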
Pull out your math skills Now calculate your calories in a method used by Nancy Clark, Sports Nutritionist, example: Take your weight = 120 lbs as an example. Multiply this 120 lbs by 10: 120 x 10 = 1200 calories, which is the number of calories your body expends for its BMR - Basal Metabolic Rate. The BMR is the energy needed to sustain the metabolic activities of the cells and tissues and to maintain circulatory, respiratory, gastrointestinal and renal processes. Simply, how many calories it takes to sleep all day. Now, take the 1200 calories and divide by 2, or 1/2 of 1200 calories, for your adult daily activities = 600 calories. That is what we would refer to as normal stuff: going to school, grocery store, studying, etc. 1200 + 600 = 1800 calories/day Next: If you do purposeful exercise then you add ~400-600 calories per hour. I will say 400 calories for a workout, unless you work out every day for 60 solid nonstop minutes. The calories now equal 1200 + 600 + 400 = 2200 calories. There are many ways to calculate your caloric intake; use another known method. Submit both calorie results and label them. You can calculate your calorie intake by using a calorie chart. A calorie chart is a chart which tells you the calories in a certain amount of food. You can get a calorie chart on the internet. Now by using it calculate how much calorie you take in. Step 1 - find out your calorific baseline, or your weight. Step 2 - multiply by 11. Step 3 - add 400 maintenance calories. Step 4 - subtract 600 calories from it. And you will be able to find out your calorie intake.
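Since this is a math exercise, the first method is easy to script and check. A minimal sketch (the 120 lb weight and 400-calorie workout are the example's own assumptions):

```python
def estimate_calories(weight_lbs, workout_cal=400):
    """Nancy Clark-style estimate: BMR = weight x 10, ordinary daily
    activity adds half the BMR, purposeful exercise adds ~400-600/hour."""
    bmr = weight_lbs * 10      # calories to sustain the body at rest
    activity = bmr / 2         # normal stuff: school, errands, studying
    return bmr + activity + workout_cal

print(estimate_calories(120))  # 1200 + 600 + 400 = 2200
```

Plugging in the example's 120 lbs reproduces the 2200-calorie total worked out above.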
How to Use Nested IF Function in Google Sheets (4 Helpful Ways) When we use a function within another function, it is called a nested function. The ability to nest functions in Google Sheets and Excel makes them very dependable for various calculations. This article will discuss how you can use the nested IF function in Google Sheets. The IF function is one of the most popular functions in Google Sheets. Here, we will use multiple IF functions along with the AND and OR functions. Also, we’ll discuss several alternatives to the nested IF function. A Sample of Practice Spreadsheet You can copy our practice spreadsheets by clicking on the following link. The spreadsheet contains an overview of the datasheet and an outline of the described ways to use the nested IF function. 4 Helpful Ways to Use Nested IF Function in Google Sheets In this section, we’ll show you 4 helpful ways to use the nested IF function in Google Sheets. We will also use the OR and AND functions along with multiple IF functions. 1. Using Multiple IF Functions First, let’s create nested IF functions only using multiple IF functions. For that, we will demonstrate the following two examples. 1.1 Calculating Students’ Final Grade In this example, we will calculate grades for several students of a class based on their marks, using the nested IF function. • First, select Cell D5. • After that, type in the following formula- =IF(C5>=90,$G$5,IF(C5>=80,$G$6,IF(C5>=70,$G$7,IF(C5>=60,$G$8,IF(C5>=50,$G$9,$G$10))))) • Finally, press the Enter key to get the required Grade. Formula Breakdown • IF(C5>=90,$G$5,IF(C5>=80,$G$6,IF(C5>=70,$G$7,IF(C5>=60,$G$8,IF(C5>=50,$G$9,$G$10))))) The first IF function checks whether the score in Cell C5 is greater than or equal to 90. If the logical test is True then it returns the content of Cell G5. Else, it moves on to the next IF function. The same process goes on for other IF functions as well. • Now, hover your mouse pointer above the bottom right corner of the selected cell. • The Fill Handle icon will be visible.
Use this icon to copy the formula to other cells. • Thus, you can calculate grades using the nested IF function. Read More: How to Use IF Function in Google Sheets (6 Suitable Examples) 1.2 Estimating Sales Commission Now, for another example, we’ll estimate yearly sales commissions received by employees of a company using the nested IF function. • Select Cell D5 first. • Afterward, type in the following formula- =(IF(C5>$B$16,$C$16,IF(C5>$B$15,$C$15,IF(C5>$B$14,$C$14,$C$13))))*C5 • Then, press the Enter key to get the required amount of commission value. Formula Breakdown • (IF(C5>$B$16,$C$16,IF(C5>$B$15,$C$15,IF(C5>$B$14,$C$14,$C$13))))*C5 The first IF function runs the logical test of whether the value in Cell C5 is greater than the value in Cell B16. If the logical test is True then it returns the content of Cell C16. Else, it moves on to the next IF function. A similar process goes on for the remaining IF functions too. The returned value by the nested IF function is finally multiplied by the value in Cell C5 to estimate the commission amount. • Finally, use the Fill Handle icon to copy the formula to other cells as well. Read More: How to Use Multiple IF Statements in Google Sheets (5 Examples) 2. Joining IF and AND Functions We can combine the AND function with multiple IF functions to form a nested statement that will calculate values based on mutually inclusive dependent conditions. We will assess students’ result status based on their marks and attendance. • To start, select Cell E5 and then type in the following formula- =IF(AND(C5>=90,D5>=80%),$D$13,IF(AND(C5>=50,D5>=80%),$D$14,$D$15)) • Now, press the Enter key to get the required status. Formula Breakdown • AND(C5>=90,D5>=80%) This AND function checks whether both the given criteria are true or not. It returns True if both the criteria are true. Else, it returns False. • IF(AND(C5>=90,D5>=80%),$D$13,IF(AND(C5>=50,D5>=80%),$D$14,$D$15)) If the logical test performed by using the first AND function is True, then the first IF function returns the value in Cell D13. Else, it moves on to the second IF function.
The second IF function works similarly to the first one. It returns the value in Cell D14 if the logical test is true. Else, it returns the value in Cell D15. • Finally, use the Fill Handle icon to copy the formula in other cells as well. 3. Uniting IF and OR Functions Executing this method is very similar to the previous example, except the criteria used in this method will be mutually exclusive. We’ll enumerate students’ result status based on attendance and marks from two tests this time. • First, select Cell F5. • Then, type in the following formula- =IF(OR(C5<50,E5<80%),$E$14,IF(OR(D5<50,E5<80%),$E$14,$E$13)) • Afterward, press the Enter key to get the required result. Formula Breakdown • OR(C5<50,E5<80%) This OR function checks whether any of the given criteria is true or not. It returns True if any of the criteria is true. Else, it returns False. • IF(OR(C5<50,E5<80%),$E$14,IF(OR(D5<50,E5<80%),$E$14,$E$13)) If the logical test performed by using the first OR function is True, then the first IF function returns the value in Cell E14. Else, it moves on to the second IF function. The second IF function works similarly to the first one. It returns the value in Cell E14 if the logical test is true. Else, it returns the value in Cell E13. • Finally, use the Fill Handle icon to copy the formula in other cells too. Read More: How to Use IF and OR Formula in Google Sheets (2 Examples) 4. Combining IF, OR, and AND Functions The OR and AND functions can be combined with multiple IF functions in a single nested function. For this method, we’ll enumerate students’ result statuses based on marks in two tests and their attendance. If a student gets 50 or more marks on both tests and has an attendance of 80% or more, his result status will be “Good”. On the other hand, if a student gets 50 or more marks in only one test and has an attendance of 80% or more, his result status will be “Satisfactory”. Students with any other category will have a “Withheld” status. • To start, select Cell F5 first.
• Afterward, type in the following formula- =IF(AND(C5>=50,D5>=50,E5>=80%),$E$13,IF(AND(OR(C5>=50,D5>=50),E5>=80%),$E$14,$E$15)) • Finally, press the Enter key to get the required result. Formula Breakdown • OR(C5>=50,D5>=50) This OR function checks whether any of the given criteria is true or not. It returns True if any of the criteria is true. Else, it returns False. • AND(OR(C5>=50,D5>=50),E5>=80%) This AND function returns True if the OR function returns True and the other criterion E5>=80% is also true. Else, it returns False. • IF(AND(C5>=50,D5>=50,E5>=80%),$E$13,IF(AND(OR(C5>=50,D5>=50),E5>=80%),$E$14,$E$15)) The first IF function returns the value in Cell E13 if the logical test performed using the AND function is true. Else, it moves on to the second IF function, which works similarly. If both the IF function logical tests are false then the value in Cell E15 is returned. • Now, use the Fill Handle icon to copy the formula in the other cells of Column F. Alternatives to Nested IF Function in Google Sheets There are a few alternatives to the nested IF function in Google Sheets. Here, we’ll discuss 3 alternatives that are much simpler to use compared to the nested IF function. We’ll use these functions to calculate yearly sales commissions from the following dataset. The required result was previously calculated using the nested IF function in Example 1.2 of the first method in the previous section. Read More: How to Use Nested IF Statements in Google Sheets (3 Examples) 1. Applying IFS Function The nested IF function is usually employed to incorporate multiple conditions and subsequent values. The IFS function can be used as an alternative in such scenarios. • First, select Cell D5. • Afterward, type in the following formula- • Now, press the Enter key to get the required value. • Finally, use the Fill Handle icon to copy the formula to other cells. 2. Employing VLOOKUP Function Another alternative to the nested IF function is the VLOOKUP function which can search through a range for any search key.
One must remember, though, that the search key has to be in the first column of the provided range. • To start with, select Cell D5. • Then, type in the following formula- • Finally, press the Enter key to get the required value. • Use the Fill Handle icon to copy the formula to other cells. Read More: How to Use VLOOKUP for Conditional Formatting in Google Sheets 3. Implementing CHOOSE Function We can also use the CHOOSE function as an alternative to the nested IF function. The CHOOSE function can return an element from a list of choices. • In the beginning, select Cell D5. • Afterward, type in the following formula- • Now, press the Enter key to get the required result. • In the end, use the Fill Handle icon to copy the formula to other cells of Column D. Things to Be Considered • The IFS function has no default (else) branch, so every return value must be paired with an explicit condition. • The search key has to be in the first column of the range while using the VLOOKUP function. This concludes our article on how to use the nested IF function in Google Sheets. I hope the article was sufficient for your requirements. Feel free to leave your thoughts on the article in the comment section. Visit our website OfficeWheel.com for more helpful articles. Related Articles We will be happy to hear your thoughts Leave a reply
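For readers who want to sanity-check the nested-IF cascade outside a spreadsheet, the grading logic of Example 1.1 can be mirrored in ordinary code. This is only an illustrative sketch: the letter grades are an assumption standing in for the contents of Cells G5-G10, which the article shows only in its screenshots.

```python
def grade(score):
    """Mirror of the nested IF cascade: each test is tried in order,
    exactly as IF(...) falls through to the next nested IF(...)."""
    # (threshold, grade) pairs play the role of cells G5-G9 (assumed letters)
    for threshold, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (50, "E")]:
        if score >= threshold:
            return letter
    return "F"  # the innermost "else" value, like $G$10

print([grade(s) for s in (95, 83, 71, 64, 55, 30)])
# → ['A', 'B', 'C', 'D', 'E', 'F']
```

The fall-through loop has exactly the shape of the nested IF: conditions are tried top to bottom, the first true one wins, and a final default plays the role of the innermost else value.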
Causal Graph Dynamics and Kan Extensions October 14, 2024 Antoine Spicher (LACL) On the one side, the formalism of Global Transformations comes with the claim of capturing any transformation of space that is local, synchronous and deterministic. The claim has been proven for different classes of models such as mesh refinements from computer graphics, Lindenmayer systems from morphogenesis modeling, and cellular automata from biological, physical and parallel computation modeling. The Global Transformation formalism achieves this by using category theory for its genericity, and more precisely the notion of Kan extension to determine the global behaviors based on the local ones. On the other side, Causal Graph Dynamics describe the transformation of port graphs in a synchronous and deterministic way. In this work, we show the precise sense in which the claim of Global Transformations holds for them as well. This is done by showing different ways in which they can be expressed as Kan extensions, each of them highlighting different features of Causal Graph Dynamics. Along the way, this work uncovers the interesting class of Monotonic Causal Graph Dynamics and their universality among General Causal Graph Dynamics.
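For readers unfamiliar with the construction, the sense in which a Kan extension "determines the global behaviors based on the local ones" can be sketched as follows. The notation here is generic, not taken from the talk: suppose $F\colon \mathbf{L} \to \mathbf{D}$ assigns results to local configurations and $K\colon \mathbf{L} \to \mathbf{G}$ embeds local configurations into global ones. The extended global behavior is then the left Kan extension of $F$ along $K$, computed pointwise as a colimit:

```latex
% Pointwise formula for the left Kan extension Lan_K F : G -> D.
% The colimit ranges over the comma category (K \downarrow g):
% all local configurations \ell equipped with a map K\ell \to g.
(\operatorname{Lan}_K F)(g) \;\cong\; \varinjlim_{(\ell,\, K\ell \to g) \,\in\, (K \downarrow g)} F(\ell)
```

Read informally: the value of the global transformation on a space $g$ is glued together, as a colimit, from the values of $F$ on all the local patches mapping into $g$ — which is the locality claim of Global Transformations. Whether the talk uses this left version, a right one, or both is a detail of the results themselves; the formula above is only the standard pointwise construction.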