content
stringlengths
86
994k
meta
stringlengths
288
619
A Near Optimal QoE-Driven Power Allocation Scheme for SVC A Near Optimal QoE-Driven Power Allocation Scheme for SVC-Based Video Transmissions Over MIMO Systems Xiang Chen1, Jenq-Neng Hwang1, Chiung-Ying Wang2, Chung-Nan Lee3 Department of Electrical Engineering, Box 352500, University of Washington, Seattle, WA 98195, USA. Email: {xchen28, hwang}@uw.edu Department of Information Management, TransWorld University, Yunlin, Taiwan, ROC. Email: ann@mail.twu.edu.tw Department of Computer Science and Engineering, National Sun Yat-Sen University, Kaohsiung, Taiwan, ROC. Email: cnlee@cse.nsysu.edu.tw Abstract²In this paper, we propose a near optimal power allocation scheme, which maximizes the quality of experience (QoE), for scalable video coding (SVC) based video transmissions over multi-input multi-output (MIMO) systems. This scheme tries to optimize the received video quality according to video frame-error-rate (FER), which may be caused by either transmission errors in physical (PHY) layer or video coding structures in application (APP) layer. Due to the complexity of the original optimization problem, we decompose it into several sub-problems, which can then be solved by classic convex optimization methods. Detailed algorithms with corresponding theoretical derivations are provided. Simulations with real video traces demonstrate the effectiveness of our proposed scheme. Keywords²power allocation; QoE; SVC; MIMO; convex Due to the exponentially increasing demands of wireless multimedia applications; offering higher quality video transmissions over wireless environments becomes an everlasting endeavor for multimedia service providers [1]. However, the error prone and band-limited nature of wireless channels creates obstacles for these bandwidth consuming applications [2]. Multi-input multi-output (MIMO) technology, which can provide more reliable and efficient wireless communications, has been considered as one of the solutions for better wireless video delivery [3]. Among plenty of existing MIMO techniques, spatial multiplexing (SM) approach, which simultaneously transmits independent data streams on each antenna to achieve higher spectral efficiency [4], is suitable for high data rate video transmissions. Scalable video coding (SVC) is another attractive technique in wireless video transmissions. Videos can be encoded with different temporal (frame rates), spatial (picture resolutions) and quality (image fidelity) scalabilities [5]. Parts of the encoded bit streams (higher enhancement layers) can be removed and the resulting substreams (base layer and lower enhancement layers) can still form another valid bit streams with lower resource consumptions but lower video qualities [6]. Therefore, SVC provides a way for adaptive switch between different video qualities according to different available resources or channel conditions at the user end. Significant amount of researches have been conducted in The research is based on work supported by the Ministry of Economic Affairs (MOEA) of Taiwan, under Grant number MOEA 102-EC-17-A-03-S1-214. transmitting SVC-based videos over MIMO-SM systems, which require jointly optimizations of physical (PHY) layer structures and video characteristics in application (APP) layer. Antenna selection is one of the major techniques to improve the video qualities. For instance, an adaptive channel selection (ACS) scheme has been proposed in [7]. In this system, bit streams with higher priorities will be transmitted through the antennas with higher signal-to-noise ratios (SNRs). A crosslayer dynamic antenna selection (CLDAS) scheme [8] is designed to jointly optimize the rate-distortion characteristics of source-channel encoding and the multiplexing-diversity tradeoff to mitigate the end-to-end video distortion. Power allocation has also been adopted in video transmissions with MIMO-SM techniques. A maximumthroughput delivery of SVC-based video over MIMO systems has been proposed in [1], in which the traditional capacityachieving water-filling (WF) algorithm [9] is improved when discrete modulation levels are considered in real applications. However, this scheme is targeted on improving the throughput of the system, which may not directly reflect quality of experience (QoE) of users. In [10], the proposed power allocation scheme can enhance the quality of SVC video streaming over MIMO systems by a modified WF (M-WF) algorithm such that unequal error protection (UEP) is applied by setting different bit-error-rate (BER) requirements on base layer and enhancement layers. Nevertheless, due to the empirical nature, the fixed BER requirements may not be optimal in different channel conditions. Transmission errors such as damaged or lost packets will degrade video qualities [11]. If SVC-based videos are transmitted, decoding errors in base layer frames will cause propagation errors in the corresponding enhancement layer frames. Moreover, directly minimizing BER in PHY layer will not necessarily minimize video frame error rate (FER) in APP layer due to different packet sizes and video coding structures. These characteristics motivate us to develop an effective power allocation scheme to optimize the received video QoE. In this paper, we propose a near optimal QoE-driven power allocation scheme for SVC-based video transmissions over MIMO-SM systems. Our proposed scheme tries to maximize the overall video quality based on video FER where video packet sizes, SVC layer structures and PHY layer BERs for different modulations are jointly considered. Due to the complexity of this optimization problem, we decompose it into several sub- problems which can then be solved by classic convex optimization methods. Detailed algorithms for searching the optimal solutions and its corresponding theoretical analyses are provided. Moreover, simulations with real SVC-based video traces are conducted to demonstrate the effectiveness of our proposed scheme. This paper is organized as follows. In the next section, system overview including SVC-based video coding and MIMO-SM system are described. Problem formulations are provided in Section III. In Section IV, we will describe our proposed optimization search algorithms together with theoretical analyses. Simulation results and conclusion remarks are given in Section V and VI respectively. Notations: Upper (lower) boldface letters are used for matrices (column vectors). diag(h) is a diagonal matrix with the elements of h sitting on the diagonal. 1 denotes a column vector all of whose components are one. (.)H means the Hermitian. (.)T is the transpose. IN denotes the N×N identity matrix. dom f means the domain of function f. A. SVC-Based Video An SVC encoded video consists of base and enhancement layers in a hierarchical dependency structure, where the video layers with higher qualities can be processed when the corresponding low-quality video layers are successfully decoded. Therefore, the base layer is mandatory to decode all the other enhancement layers [12][13]. SVC can support all of the temporal, spatial and quality scalabilities. In this paper, we only consider videos encoded with quality scalability. However, the similar idea can be applied to videos with temporal or spatial scalabilities. The QoEs at user ends are normally measured in utility values [14]. In order to maximize the overall utilities of the recovered videos, we choose a perceptual quality model [15] for SVC-based videos with quality scalability: c 1 q q e 1 min ° ® c1ql qmin c1ql 1 qmin e ,l 1 ,l t 2 where c is a video dependent model parameter; ql is the quantization stepsize of the lth video layer; and qmin is the minimum quantization stepsize which correspond to the video layer with highest quality. B. Proposed System Structure The proposed MIMO-SM system for SVC-based video transmissions is shown in Fig. 1. A video sequence is encoded into one base layer and L-1 enhancement layers, which are fed into a MIMO system with Nt (Nt /) transmitter antennas and Nr receiver antennas. At the transmitter side, an adaptive channel selection (ACS) module [1][7] is implemented so that video layers with higher importance, such as base layer, are transmitted through the channels (antennas) with higher SNR. The power allocation module allocates appropriate power to modulated symbols on each channel based on the cross-layer video information and channel state information (CSI) fed back from receiver side so that the overall utility is maximized. After precoding, the data symbols are transmitted through wireless channels. At the receiver side, a channel estimation module sends CSIs back to the transmitter side. In this paper, we assume the CSIs containing full channel knowledge are fed back without any estimation error and delay. Similar as in [1], we assume the channel selection sequences and modulation schemes are known at the receiver side through control channel. After decoding, detection, demodulation and channel selection, the received bit-streams are fed into SVC decoder for video reconstruction. We assume no error concealment techniques are applied in the system. Therefore, video frames with any single bit error are dropped. Moreover, if the lth layer is not successfully decoded, all higher layers (i.e., from l+1 to L) of this frame are also dropped. C. MIMO Channel Model The system equation can be described as: Hx n , where y is an Nr×1 complex received signal vector, x is an Nt×1 complex transmitted symbol vector with E[xxH]=diag(p) =diag(p1, p2« pNt), subject to normalized power 1Tp=1 and each element in p is not less than 0. n is an Nr×1 independent and identically distributed (i.i.d.) complex additive white Gaussian noise (AWGN) vector. H is Nr×Nt channel matrix in Fig. 1. Proposed MIMO System for SVC-Based Video Transmissions which all elements are i.i.d. circularly symmetric complex Gaussian (ZMCSCG) random variables with zero mean and variance 1, i.e., ^0,1` . Therefore, the average SNR of sk · § l max ¦ ul ¨ 1 Pbk pk ¸ subject to pk t 0; ¦ k 1 pk the system is ʌ=1Tp /N0=1/N0. MIMO channel matrix H can be decomposed by the singular value decomposition (SVD): H U/V H , where U and V are unitary matrices. ȁ is a diagonal matrix specified as: O1 , O2 ,..., OR ,0,...,0 , where R=min(Nr, Nt) is the rank of channel matrix H, and Ȝ1 Ȝ2 « ȜR are eigenvalues of H H H . If correct and full channel knowledge is available at both transmitter and receiver sides, the symbols are precoded with V before transmission and decoded with UH at receiver. Therefore, the received signal before detection can be expressed as: UH HVx UH n /x n . Since U is a unitary matrix, each element of n=U n still follows the complex Gaussian distribution, i.e., 0, N 0I N . It is clear that by using precoder and decoder, a MIMO channel can be decomposed into R independent single-input single-output (SISO) channels [1]. In error prone wireless channels, the receiver bit error rate (BER) of M-QAM can be approximated as [16]: 1 · 2 ¨1 ¸ § M ¹ ¨ Pb | log 2 M § 3log 2 M M 1 ¸· 2E ¸ , ¸ N 0 ¸¸ where pl is the lth element of power vector p. Suppose there are sl bits in total for transmitting the lth layer of a single video frame. The FER of layer l can be calculated as: fl p 1 1 Pbk pk . k 1 Our optimization problem is to maximize the system utility based on video frame error rate (FER) subject to certain power constraints: Here, we consider a linear mapping between utilities and FER for simplicity. In fact, the actual utility model, as a decreasing function of FER, can vary when different error concealment techniques are applied. Thus, minimizing FER will be more general in real applications. Furthermore, directly solving optimization problem in Eq. (9) is not an easy task. Therefore, we decompose Eq. (9) into L subproblems: if up to the lth layer is allowed to be transmitted, the corresponding frame correction rate of layer l, denoted as f l p 1 f l p , can be optimized: min log fl p p ¦ s log 1 Pbk pk (10) k 1 subject to pk t 0; k 1 Note that in this case, pl+1=pl+2 « pL=0 are implied since the layers higher than l are not allowed to be transmitted. If p*l denotes the solution of the l sub-problem in Eq. (10), in our proposed scheme, the solution of Eq. (9) is found by: p* arg max uk f k p*l . Please note that as the original problem is solved by finding the best among solutions of the sub-problems, the solution of Eq. (11) is a near-optimal solution of the original problem. In Section V, we will demonstrate the effectiveness and the near-optimality of the proposed scheme comparing with the global optimal points obtained by exhaustive searches. where M is the number of constellation points; Q(.) is the complementary error function, and Eb /N0 is the average bit energy to average noise power ratio. Since SNR can be calculated from Eb /N0, i.e. SNR=log2(M)×Eb/N0, in our proposed MIMO-SM system, BER for the lth channel can be derived as: 1 · 2 ¨1 ¸ M l ¸¹ § § 3 · Pbl pl | Q¨ ¨ ¸ UOl pl ¸ ¨ © M l 1 ¹ log 2 M l A. Log-Concavity of Objective Functions in Sub-problems To simplify the notations, we express Eq. (7) as: Pbl pl | AQ Bl UOl pl Al Al ) Bl UOl pl , (12) where Al and Bl are corresponding constants in Eq. (7) and their values are determined by Ml; ) (.) is the cumulative distribution function of the standard normal distribution. The objective function in Eq. (10) can be expressed as: log fl p ¦ s k 1 log 1 Ak Ak ) Bk UOk pk . (13) As stated in [17], a function g(x) is log-concave if and only if for all x dom g x , g x g '' x d g ' x , where g¶x) and g¶¶x) are, respectively, the first and second derivatives of function g. If we define function gk as: gk pk 1 Pbk pk 1 Ak Ak ) Bk UOk pk , (15) which is non-negative. Its first derivatives gk¶is: g k ' pk Ak Bk UOk 2 pk Bk UOk pk Ak Bk UOk 2 2S pk B UO p k k k wL p, ȟ,X pk where I (.) is the probability density function (pdf) of the standard normal distribution. And its second derivative is: g k '' pk Ak Bk UOk 4 2S Bk UOk pk p 1.5 Bk UOk pk 0.5 pk* ,[k* ,X * sk Ak I Bk UOk pk* 2 1 Ak Ak ) Bk UOk Bk UOk p For convex optimization problems, if any point satisfies the KKT conditions, it is primal and dual optimal, with a zero duality gap [17]. which is non-positive for pk . Due to gk¶¶pk)gk(pk (gk¶pk))2, gk(pk) is log-concave. Also, Eq. (13) is nonnegative sum of convex functions, which is also convex [17]. Therefore, the optimization problem in Eq. (10) can be solved by classic convex optimization methods. Examples of log(gk(pk)) are plotted in Fig. 2. The above KKT conditions imply: sk Ak I X t Bk UOk pk* 2 1 Ak Ak ) Bk UOk Bk UOk p ·¸ p 0. (21) ¨ * sk Ak I Bk UOk pk Bk UOk ¨X 2 1 Ak Ak ) Bk UOk pk* There are two cases when Eq. (21) holds: 0 , which implies: X* t Ll p, ȟ,X l ¦ sk log 1 Ak Ak ) k 1 Bk U pk Ok § l ¦ [ k pk X ¨ ¦ pk 1¸ , k 1 ©k 1 where ȟ and X are Lagrange multipliers associated with the inequality constraints and equality constraint respectively. The Karush-Kuhn-Tucker (KKT) conditions can be written for each value of k «l as: Primal feasible: pk* t 0; Dual feasible: [ p Complementary slackness: Gradient of Lagrangian vanishes: Bk UOk 2 1 Ak Ak ) 0 0 sk Ak I Bk UOk pk* 2 1 Ak Ak ) Case 1 is trivial since when pk* 0 , the video layers higher than k are not allowed to be transmitted, and it is equivalent to solving the (k-1)th sub-problem. Therefore, we only consider Case 2 specified in Eq. (23). And for different X * , any solution p*l satisfying Eq. (23) and power constraint k 1 1 is an optimal point of the l sub-problem in Eq. C. Proposed Algorithm Based on Eq. (23), we define the function hk as: Bk UOk Bk UOk pk* hk pk* log 2 2S 1 Ak Ak ) [k* t 0 . pk* ! 0 , which implies: Fig. 2. Examples of log(gk(pk)) when ʄk=1 and ʌ=1. B. Conditions of Optimal Solutions in Sub-problems The Lagrangian of the lth sub-problem, in Eq. (10), can be derived as: sk Ak I 0 log sk Ak Bk UOk Bk UOk pk* p (24) Bk UOk pk* The proposed bisection search algorithm is shown in Fig. 3, which is to find the optimal point p*l of the lth sub-problem. upper= min( hk (1) ), for k «l lower= max(hk (ǻIRUk «l 1 ! ' ) if ( ¦ hk1 P ; k 1 pk* 1 ) lower= ȝ; upper= ȝ; end if 11. end while Fig. 3. Proposed Bisection Search Algorithm for the lth Sub-problem. implementation. The overall optimization problem in Eq. (9) is solved by the proposed algorithm shown in Fig. 4. for l=1:L Obtain p*l by solving the lth sub-problem if (Ul Umax) k 1 Figure 5 illustrates a snapshot of the system utilities calculated by the objective function in Eq. (9). The optimal curve, obtained by exhaustive searches, is included for comparison. QPSK modulation is adopted for all the video layers. The average SNR is set as 13dB. There are 30 simulation results included with different channel matrices. It is clear that our proposed algorithm is very close to the optimal solutions. Even though WF algorithm is optimal in terms of PHY layer capacities, it is no longer optimal in APP layer utilities. M-WF is better than WF since UEP scheme on base layer and enhancement layers is applied. But it is still far from optimal. f k p*l uk ; layers of City are u=[0.5459, 0.2749, 0.0826, 0.0966] with an empirically chosen c=0.13 [15]. The encoded network abstraction layer unit (NALU) is fragmented by link layer with packet size as 48 bytes [8] and then transmitted through PHY layer. A 4×4 MIMO-SM system is applied with 100 KHz bandwidth. The CSI is fed back every channel coherent time, which is assumed to be 1ms. At the receiver side, we assume the packets containing control messages such as video coding parameters are correctly received. Also, perfect error detection scheme is assumed so that bit errors at the receiver side can be detected. The undecodable NALUs, including erroneous bits caused by channel degradation or unsatisfied dependencies, are discarded before passing through the SVC decoder. We compare our proposed scheme with traditional WF algorithm, M-WF algorithm in [10] and the simple equal power allocation scheme in the simulations. p*l ; end if end for Fig. 4. Proposed Algorithm for the Optimization Problem in Eq. (9). In this section, the effectiveness and the near-optimality of our proposed algorithm are evaluated through plenty of simulations. Video clips Foreman and City with resolution of 352×288 are encoded by the JSVM (Joint Scalable Video Model) version 9.19 [18]. Frame rates are both set as 30fps. GOP sizes are 8 with frame the pattern: IBBBBBBB. There are 161 frames encoded in total so that 20 GOP groups with one additional I frame are included. Three additional quality enhancement layers are encoded with medium-grain scalability (MGS). The basis quantization parameters of the four layers (i.e., one base layer and three enhancement layers) are set as QP=[45, 38, 35, 30] and the corresponding uniform quantization stepsizes can be calculated by q=2(QP4)/6 [15]. Based on Eq. (1), the utilities of the four layers of Foreman are u=[0.5719, 0.2614, 0.0772, 0.0895] with an empirically chosen c=0.12 [15]. The utilities of the four Fig. 5. Snapshot of System Utilities (QPSK, SNR: 13dB) The successfully received NALUs by the four schemes, with the same seeds for random number generations of wireless channel environments, are fed into SVC decoder to reconstruct the videos. Figure 6 shows the PSNRs of reconstructed Foreman clip when the average SNR is 18dB and QPSK modulation is used for all the video layers. It is clear that our proposed method outperforms the other three, even though our optimization objective function is not PSNR. This is due to the fact that by applying our proposed scheme with reasonable utility functions, more video frames with higher quality layers are received. Since NALU sizes are included in our objective function, unequal error protection (UEP) capability on lower layers of large NALUs, such as I-frames, is naturally inherent in our scheme. Moreover, the better receiving of I-frames also leads to higher PSNR of successive B-frames in the same GOP. Note that M-WF algorithm is not necessarily a good choice when transmitting NALUs with small sizes (i.e., B frames). This is due to the fact that over-protection of base layer may lead to waste of power. terms of utility. Moreover, by applying our proposed scheme with real-world SVC video traces, users can receive more error-free video frames with higher quality layers, which lead to better PSNR or QoE for the reconstructed videos. Similar results of City are plotted in Fig. 7. Here, the system average SNR is set as 20dB. The modulation schemes are QPSK, 16-QAM, 16-QAM, 64-QAM for video layer 1, 2, 3, and 4 respectively. Clearly, our proposed scheme has higher PSNR than that of the other three schemes. Since the BER of different modulation schemes are part of our objective function, our proposed scheme has much more obvious advantage comparing with the other Fig. 6. PSNR of Reconstructed Video (Foreman, QPSK, SNR: 18dB) Fig. 7. PSNR of Reconstructed Video (City, l1: QPSK, l2: 16-QAM, l3: 16QAM, l4: 64-QAM, SNR: 20dB) In this paper, we have proposed a near-optimal QoEdriven power allocation scheme for SVC-based video transmissions over a MIMO-SM system. Detailed algorithms are described with theoretical reasoning. Simulation results demonstrate that our proposed scheme is near optimal in ' 6RQJ DQG & : &KHQ ³0D[LPXP-throughput delivery of SVCbased video over MIMO systems with time-varying channel FDSDFLW\´ -RXUQDO RI 9LVXDO &RPPXQLFDWLRQ DQG ,PDJH Representation, vol. 19, no. 8, pp. 520±528, Dec. 2008. J.-N. Hwang, Multimedia networking: from theory to practice. Cambridge University Press, 2009. W. Ajib and D. HaccouQ³$QRYHUYLHZRIVFKHGXOLQJDOJRULWKPVLQ MIMO-based fourth-JHQHUDWLRQ ZLUHOHVV V\VWHPV´ IEEE Network, vol. 19, no. 5, pp. 43±48, 2005. X. Chen, J.-N. Hwang, P.-H. Wu, H.-J. Su, and C.-N. /HH³$GDSWLYH Mode and Modulation Coding Switching Scheme in MIMO 0XOWLFDVWLQJ6\VWHP´LQProc. of IEEE International Symposium on Circuits and Systems (ISCAS), 2013, pp. 441±444. 7 :LHJDQG / 1REOHW DQG ) 5RYDWL ³6FDODEOH YLGHR FRGLQJ IRU ,379 VHUYLFHV´ IEEE Transactions on Broadcasting, vol. 55, no. 2, pp. 527±538, 2009. + 6FKZDU] ' 0DUSH DQG 7 :LHJDQG ³2YHUYLHZ RI WKH 6FDODEOH 9LGHR &RGLQJ ([WHQVLRQ RI WKH +$9& 6WDQGDUG´ IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, no. 9, pp. 1103±1120, Sep. 2007. D. Song and C. W. Chen, ³6FDODEOH+$9&YLGHRWUDQVPLVVLRQ over MIMO wireless systems with adaptive channel selection based RQ SDUWLDO FKDQQHO LQIRUPDWLRQ´ IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, no. 9, pp. 1218±1226, 2007. C.-H. Chen, W.-H. Chung, and Y.-&:DQJ³&URVV-Layer Design for 9LGHR 6WUHDPLQJ ZLWK '\QDPLF $QWHQQD 6HOHFWLRQ´ LQ IEEE International Conference on Image Processing, pp. 3245±3248, 2011. C. Oestges and B. Clerckx, MIMO Wireless Communications From Real-World Propagation to Space-Time Code Design, 1st ed. Academic Press, 2007. 4 /LX 6 /LX DQG & : &KHQ ³$ QRYHO SULRULWL]HG VSDWLDO multiplexing for MIMO wireless system with application to H.264 69& YLGHR´ LQ Proc. of IEEE International Conference on Multimedia and Expo (ICME), pp. 968-973, 2010. S. Chikkerur, V. Sundaram, M. Reisslein, and L. J. Karam, ³2EMHFWLYH 9LGHR 4XDOLW\ $VVHVVPHQW 0HWKRGVௗ 3HUIRUPDQFH &RPSDULVRQ´IEEE Transactions on Broadcasting, vol. 57, no. 2, pp. 165±182, 2011. P.-H. Wu anG < + +X ³2SWLPDO /D\HUHG 9LGHR ,379 0XOWLFDVW 6WUHDPLQJ 2YHU 0RELOH :L0$; 6\VWHPV´ IEEE Transactions on Multimedia, vol. 13, no. 6, pp. 1395±1403, 2011. 2)'06\VWHPV´Wireless Personal Communications, Jul. 2012. C.-W. Huang, S.-M. Huang, P.-H. Wu, S.-J. Lin, and J.-N. Hwang, ³2/0 2SSRUWXQLVWLF /D\HUHG 0XOWLFDVWLQJ IRU 6FDODEOH ,379 RYHU 0RELOH:L0$;´IEEE Transactions on Mobile Computing, vol. 11, no. 3, pp. 453±463, Mar. 2012. Z. Ma, M. Xu, Y.-) 2X DQG < :DQJ ³0RGHOLQJ RI UDWH DQG perceptual quality of compressed video as functions of frame rate and TXDQWL]DWLRQ VWHSVL]H DQG LWV DSSOLFDWLRQV´ IEEE Transactions on Circuit and Systems for Video Technology, vol. 22, no. 5, pp. 671± 682, 2012. B. Sklar, Digital Communications - Fundamentals and Applications, 2nd ed. Prentice Hall, 2001. S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
{"url":"https://studylib.net/doc/18845686/a-near-optimal-qoe-driven-power-allocation-scheme-for-svc","timestamp":"2024-11-08T02:55:47Z","content_type":"text/html","content_length":"78467","record_id":"<urn:uuid:4f473cef-ff6b-4f9f-886d-672c7ce21bdc>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00484.warc.gz"}
Ch. 11 Homework - Statistics | OpenStax 11.1 Facts About the Chi-Square Distribution Decide whether the following statements are true or false. As the number of degrees of freedom increases, the graph of the chi-square distribution looks more and more symmetrical. The standard deviation of the chi-square distribution is twice the mean. The mean and the median of the chi-square distribution are the same if df = 24. 11.2 Goodness-of-Fit Test For each problem, use a solution sheet to solve the hypothesis test problem. Go to Appendix E Solution Sheets for the chi-square solution sheet. Round expected frequency to two decimal places. A six-sided die is rolled 120 times. Fill in the expected frequency column. Then, conduct a hypothesis test to determine if the die is fair. The data in Table 11.34 are the result of the 120 rolls. Face Value Frequency Expected Frequency The marital status distribution of the U.S. male population, ages 15 and older, is as shown in Table 11.35. Marital Status % Expected Frequency Never Married 31.3% Married 56.1% Widowed 2.5% Divorced/Separated 10.1% Suppose that a random sample of 400 U.S. males, 18 to 24 years old, yielded the following frequency distribution. We are interested in whether this age group of males fits the distribution of the U.S. adult population. Calculate the frequency one would expect when surveying 400 people. Fill in Table 11.35, rounding to two decimal places. Marital Status Frequency Never Married 140 Married 238 Widowed 2 Divorced/Separated 20 Use the following information to answer the next two exercises. The columns in Table 11.37 contain the Race/Ethnicity of U.S. Public Schools for a recent year, the percentages for the Advanced Placement Examinee Population for that class, and the Overall Student Population. Suppose the right column contains the results of a survey of 1,000 local students from that year who took an AP exam. Race/Ethnicity AP Examinee Population Overall Student Population Survey Frequency Asian, Asian American, or Pacific Islander 10.2% 5.4% 113 Black or African American 8.2% 14.5% 94 Hispanic or Latino 15.5% 15.9% 136 American Indian or Alaska Native 0.6% 1.2% 10 White 59.4% 61.6% 604 Not Reported/Other 6.1% 1.4% 43 Perform a goodness-of-fit test to determine whether the local results follow the distribution of the U.S. overall student population based on ethnicity. Perform a goodness-of-fit test to determine whether the local results follow the distribution of U.S. AP examinee population, based on ethnicity. The city of South Lake Tahoe, California, has an Asian population of 1,419 out of a total population of 23,609. Suppose that a survey of 1,419 self-reported Asians in the borough of Manhattan in the New York City area yielded the data in Table 11.38. Conduct a goodness-of-fit test to determine if the self-reported subgroups of Asians in Manhattan fit that of the South Lake Tahoe area. Race South Lake Tahoe Frequency Manhattan Frequency Asian Indian 131 174 Chinese 118 557 Filipino 1,045 518 Japanese 80 54 Korean 12 29 Vietnamese 9 21 Other 24 66 Use the following information to answer the next two exercises. UCLA conducted a survey of more than 263,000 college freshmen from 385 colleges in fall 2005. The results of students’ expected majors by gender were reported in The Chronicle of Higher Education (2/2/2006). Suppose a survey of 5,000 graduating females and 5,000 graduating males was done as a follow-up last year to determine what their actual majors were. The results are shown in the tables for Exercise 11.77 and Exercise 11.78. The second column in each table does not add to 100 percent because of rounding. Conduct a goodness-of-fit test to determine if the actual college majors of graduating females fit the distribution of their expected majors. Major Females—Expected Major Females—Actual Major Arts & Humanities 14% 670 Biological Sciences 8.4% 410 Business 13.1% 685 Education 13% 650 Engineering 2.6% 145 Physical Sciences 2.6% 125 Professional 18.9% 975 Social Sciences 13% 605 Technical 0.4% 15 Other 5.8% 300 Undecided 8% 420 Conduct a goodness-of-fit test to determine if the actual college majors of graduating males fit the distribution of their expected majors. Major Males—Expected Major Males—Actual Major Arts & Humanities 11% 600 Biological Sciences 6.7% 330 Business 22.7% 1,130 Education 5.8% 305 Engineering 15.6% 800 Physical Sciences 3.6% 175 Professional 9.3% 460 Social Sciences 7.6% 370 Technical 1.8% 90 Other 8.2% 400 Undecided 6.6% 340 Read the statement and decide whether it is true or false. In a goodness-of-fit test, the expected values are the values we would expect if the null hypothesis were true. In general, if the observed values and expected values of a goodness-of-fit test are not close together, then the test statistic can get very large and on a graph will be way out in the right tail. Use a goodness-of-fit test to determine if high school principals believe that students are absent equally during the week. The test to use to determine if a six-sided die is fair is a goodness-of-fit test. In a goodness-of-fit test, if the p-value is 0.0113, in general, do not reject the null hypothesis. A sample of 212 commercial businesses was surveyed for recycling one commodity; a commodity here means any one type of recyclable material such as plastic or aluminum. Table 11.41 shows the business categories in the survey, the sample size of each category, and the number of businesses in each category that recycle one commodity. Based on the study, on average half of the businesses were expected to be recycling one commodity. As a result, the last column shows the expected number of businesses in each category that recycle one commodity. At the 5 percent significance level, perform a hypothesis test to determine if the observed number of businesses that recycle one commodity follows the uniform distribution of the expected values. Business Type Number in Class Observed Number that Recycle One Commodity Expected Number that Recycle One Commodity Office 35 19 17.5 Retail/Wholesale 48 27 24 Food/Restaurants 53 35 26.5 Manufacturing/Medical 52 21 26 Hotel/Mixed 24 9 12 Table 11.42 contains information from a survey of 499 participants classified according to their age groups. The second column shows the percentage of obese people per age class among the study participants. The last column comes from a different study at the national level that shows the corresponding percentages of obese people in the same age classes in the United States. Perform a hypothesis test at the 5 percent significance level to determine whether the survey participants are a representative sample of the USA obese population. Age Class (years) Obese (Percentage) Expected USA Average (Percentage) 20–30 75 32.6 31–40 26.5 32.6 41–50 13.6 36.6 51–60 21.9 36.6 61–70 21 39.7 11.3 Test of Independence For each problem, use a solution sheet to solve the hypothesis test problem. Go to Appendix E for the chi-square solution sheet. Round expected frequency to two decimal places. A recent debate about where in the U.S. skiers believe the skiing is best prompted the following survey. Test to see if the best ski area is independent of the level of the skier. U.S. Ski Area Beginner Intermediate Advanced Tahoe 20 30 40 Utah 10 30 60 Colorado 10 40 50 Car manufacturers are interested in whether there is a relationship between the size of car an individual drives and the number of people in the driver’s family—that is, whether car size and family size are independent. To test this, suppose that 800 car owners were randomly surveyed with the results in Table 11.44. Conduct a test of independence. Family Size Sub & Compact Mid-Size Full-Size Van & Truck 3–4 20 50 100 90 5+ 20 30 70 70 College students may be interested in whether their majors have any effect on starting salaries after graduation. Suppose that 300 recent graduates were surveyed as to their majors in college and their starting salaries after graduation. Table 11.45 shows the data. Conduct a test of independence. Major < $50,000 $50,000–$68,999 $69,000 + English 5 20 5 Engineering 10 30 60 Nursing 10 15 15 Business 10 20 30 Psychology 20 30 20 Some travel agents claim that honeymoon hotspots vary according to age of the bride. Suppose that 280 recent brides were interviewed as to where they spent their honeymoons. The information is given in Table 11.46. Conduct a test of independence. Location 20–29 30–39 40–49 50+ Niagara Falls 15 25 25 20 Poconos 15 25 25 10 Europe 10 25 15 5 Virgin Islands 20 25 15 5 A manager of a sports club keeps information concerning the main sport in which members participate and their ages. To test whether there is a relationship between the age of a member and his or her choice of sport, 643 members of the sports club are randomly selected. Conduct a test of independence. Sport 18–25 26–30 31–40 41+ Racquetball 42 58 30 46 Tennis 58 76 38 65 Swimming 72 60 65 33 A major food manufacturer is concerned that the sales for its skinny french fries have been decreasing. As a part of a feasibility study, the company conducts research into the types of fries sold across the country to determine if the type of fries sold is independent of the area of the country. The results of the study are shown in Table 11.48. Conduct a test of independence. Type of Fries Northeast South Central West Skinny Fries 70 50 20 25 Curly Fries 100 60 15 30 Steak Fries 20 40 10 10 According to Dan Leonard, an independent insurance agent in the Buffalo, New York area, the following is a breakdown of the amount of life insurance purchased by males in the following age groups. He is interested in whether the age of the male and the amount of life insurance purchased are independent events. Conduct a test for independence. Age of Males None < $200,000 $200,000–$400,000 $401,001–$1,000,000 $1,000,001+ 20–29 40 15 40 0 5 30–39 35 5 20 20 10 40–49 20 0 30 0 30 50+ 40 30 15 15 10 Suppose that 600 thirty-year-olds were surveyed to determine whether there is a relationship between the level of education an individual has and salary. Conduct a test of independence. Annual Salary Not a High School Graduate High School Graduate College Graduate Masters or Doctorate < $30,000 15 25 10 5 $30,000–$40,000 20 40 70 30 $40,000–$50,000 10 20 40 55 $50,000–$60,000 5 10 20 60 $60,000+ 0 5 10 150 Read the statement and decide whether it is true or false. The number of degrees of freedom for a test of independence is equal to the sample size minus one. The test for independence uses tables of observed and expected data values. The test to use when determining if the college or university a student chooses to attend is related to his or her socioeconomic status is a test for independence. In a test of independence, the expected number is equal to the row total multiplied by the column total divided by the total surveyed. An ice cream maker performs a nationwide survey about favorite flavors of ice cream in different geographic areas of the United States. Based on Table 11.51, do the numbers suggest that geographic location is independent of favorite ice cream flavors? Test at the 5 percent significance level. U.S. Region/Flavor Strawberry Chocolate Vanilla Rocky Road Mint Chocolate Chip Pistachio Row Total West 12 21 22 19 15 8 97 Midwest 10 32 22 11 15 6 96 East 8 31 27 8 15 7 96 South 15 28 30 8 15 6 102 Column Total 45 112 101 46 60 27 391 Table 11.52 provides results of a recent survey of the youngest online entrepreneurs whose net worth is estimated at one million dollars or more. Their ages range from 17 to 30. Each cell in the table illustrates the number of entrepreneurs who correspond to the specific age group and their net worth. Are the ages and net worth independent? Perform a test of independence at the 5 percent significance level. Age Group/Net Worth Value (in millions of U.S. dollars) 1–5 6–24 ≥25 Row Total 17–25 8 7 5 20 26–30 6 5 9 20 Column Total 14 12 14 40 A 2013 poll in California surveyed people about a new tax. The results are presented in Table 11.53 and are classified by ethnic group and response type. Are the poll responses independent of the participants’ ethnic group? Conduct a test of independence at the 5 percent significance level. Opinion/Ethnicity Asian American White/Non-Hispanic African American Latino Row Total Against Tax 48 433 41 160 682 In Favor of Tax 54 234 24 147 459 No Opinion 16 43 16 19 94 Column Total 118 710 81 326 1,235 11.4 Test for Homogeneity For each word problem, use a solution sheet to solve the hypothesis test problem. Go to Appendix E Solution Sheets for the chi-square solution sheet. Round expected frequency to two decimal places. A psychologist is interested in testing whether there is a difference in the distribution of personality types for business majors and social science majors. The results of the study are shown in Table 11.54. Conduct a test of homogeneity. Test at a 5 percent level of significance. Open Conscientious Extrovert Agreeable Neurotic Business 41 52 46 61 58 Social Science 72 75 63 80 65 Do men and women select different breakfasts? The breakfasts ordered by randomly selected men and women at a popular breakfast place are shown in Table 11.55. Conduct a test for homogeneity at a 5 percent level of significance. French Toast Pancakes Waffles Omelettes Men 47 35 28 53 Women 65 59 55 60 A fisherman is interested in whether the distribution of fish caught in Green Valley Lake is the same as the distribution of fish caught in Echo Lake. Of the 191 randomly selected fish caught in Green Valley Lake, 105 were rainbow trout, 27 were other trout, 35 were bass, and 24 were catfish. Of the 293 randomly selected fish caught in Echo Lake, 115 were rainbow trout, 58 were other trout, 67 were bass, and 53 were catfish. Perform a test for homogeneity at a 5 percent level of significance. In 2007, the United States had 1.5 million homeschooled students, according to the U.S. National Center for Education Statistics. In Table 11.56, you can see that parents decide to homeschool their children for different reasons, and some reasons are ranked by parents as more important than others. According to the survey results shown in the table, is the distribution of applicable reasons the same as the distribution of the most important reason? Provide your assessment at the 5 percent significance level. Did you expect the result you obtained? Reasons for Homeschooling Applicable Reason (in thousands of respondents) Most Important Reason (in thousands of respondents) Row Total Concern About the Environment of Other Schools 1,321 309 1,630 Dissatisfaction with Academic Instruction at Other Schools 1,096 258 1,354 To Provide Religious or Moral Instruction 1,257 540 1,797 Child Has Special Needs, Other Than Physical or Mental 315 55 370 Nontraditional Approach to Child’s Education 984 99 1,083 Other Reasons (e.g., finances, travel, family time, etc.) 485 216 701 Column Total 5,458 1,477 6,935 When looking at energy consumption, we are often interested in detecting trends over time and how they correlate among different countries. The information in Table 11.57 shows the average energy use in units of kg of oil equivalent per capita in the United States and the joint European Union countries (EU) for the six-year period 2005 to 2010. Do the energy use values in these two areas come from the same distribution? Perform the analysis at the 5 percent significance level. Year European Union United States Row Total 2010 3,413 7,164 10,557 2009 3,302 7,057 10,359 2008 3,505 7,488 10,993 2007 3,537 7,758 11,295 2006 3,595 7,697 11,292 2005 3,613 7,847 11,460 Column Total 20,965 45,011 65,976 The Insurance Institute for Highway Safety collects safety information about all types of cars every year and publishes a report of top safety picks among all cars, makes, and models. Table 11.58 presents the number of top safety picks in six car categories for the two years 2009 and 2013. Analyze the table data to conclude whether the distribution of cars that earned the top safety picks safety award has remained the same between 2009 and 2013. Derive your results at the 5 percent significance level. Year/Car Type Small Mid-Size Large Small SUV Mid-Size SUV Large SUV Row Total Column Total 43 52 29 21 56 10 211 11.5 Comparison of the Chi-Square Tests For each word problem, use a solution sheet to solve the hypothesis test problem. Go to Appendix E Solution Sheets for the chi-square solution sheet. Round expected frequency to two decimal places. Is there a difference between the distribution of community college statistics students and the distribution of university statistics students in what technology they use on their homework? Of some randomly selected community college students, 43 used a computer, 102 used a calculator with built-in statistics functions, and 65 used a table from the textbook. Of some randomly selected university students, 28 used a computer, 33 used a calculator with built-in statistics functions, and 40 used a table from the textbook. Conduct an appropriate hypothesis test using a 0.05 level of Read the statement and decide whether it is true or false. If df = 2, the chi-square distribution has a shape that reminds us of the exponential. 11.6 Test of a Single Variance Use the following information to answer the next 12 exercises. Suppose an airline claims that its flights are consistently on time with an average delay of at most 15 minutes. It claims that the average delay is so consistent that the variance is no more than 150 minutes. Doubting the consistency part of the claim, a disgruntled traveler calculates the delays for his next 25 flights. The average delay for those 25 flights is 22 minutes with a standard deviation of 15 minutes. Is the traveler disputing the claim about the average or about the variance? A sample standard deviation of 15 minutes is the same as a sample variance of __________ minutes. Is this a right-tailed, left-tailed, or two-tailed test? chi-square test statistic = ________ Graph the situation. Label and scale the horizontal axis. Mark the mean and test statistic. Shade the p-value. Let α = 0.05 Decision: ________ Conclusion (write out in a complete sentence): ________ How did you know to test the variance instead of the mean? If an additional test were done on the claim of the average delay, which distribution would you use? If an additional test were done on the claim of the average delay, but 45 flights were surveyed, which distribution would you use? For each word problem, use a solution sheet to solve the hypothesis test problem. Go to Appendix E Solution Sheets for the chi-square solution sheet. Round expected frequency to two decimal places. A plant manager is concerned her equipment may need recalibrating. It seems that the actual weight of the 15-ounce cereal boxes it fills has been fluctuating. The standard deviation should be at most 0.5 ounces. To determine if the machine needs to be recalibrated, 84 randomly selected boxes of cereal from the next day’s production were weighed. The standard deviation of the 84 boxes was 0.54. Does the machine need to be recalibrated? Consumers may be interested in whether the cost of a particular calculator varies from store to store. Based on surveying 43 stores, which yielded a sample mean of $84 and a sample standard deviation of $12, test the claim that the standard deviation is greater than $15. Isabella, an accomplished Bay-to-Breakers runner, claims that the standard deviation for her time to run the 7.5 mile race is at most 3 minutes. To test her claim, Isabella looks up five of her race times. They are 55 minutes, 61 minutes, 58 minutes, 63 minutes, and 57 minutes. Airline companies are interested in the consistency of the number of babies on each flight so that they have adequate safety equipment. They are also interested in the variation of the number of babies. Suppose that an airline executive believes the average number of babies on flights is six with a variance of nine at most. The airline conducts a survey. The results of the 18 flights surveyed give a sample average of 6.4 with a sample standard deviation of 3.9. Conduct a hypothesis test of the airline executive’s belief. The number of births per woman in China is 1.6, down from 5.91 in 1966. This fertility rate has been attributed to the law passed in 1979 restricting births to one per woman. Suppose that a group of students studied whether the standard deviation of births per woman was greater than 0.75. They asked 50 women across China the number of births they had. The results are shown in Table 11.59. Does the students’ survey indicate that the standard deviation is greater than 0.75? # of Births Frequency According to an avid aquarist, the average number of fish in a 20-gallon tank is 10, with a standard deviation of two. His friend, also an aquarist, does not believe that the standard deviation is two. She counts the number of fish in 15 other 20-gallon tanks. Based on the results that follow, do you think that the standard deviation is different from two? Data: 11; 10; 9; 10; 10; 11; 11; 10; 12; 9; 7; 9; 11; 10; and 11. The manager of Frenchies is concerned that patrons are not consistently receiving the same amount of French fries with each order. The chef claims that the standard deviation for a 10-ounce order of fries is at most 1.5 ounces, but the manager thinks that it may be higher. He randomly weighs 49 orders of fries, which yields a mean of 11 ounces and a standard deviation of 2 ounces. You want to buy a specific computer. A sales representative of the manufacturer claims that retail stores sell this computer at an average price of $1,249 with a very narrow standard deviation of $25. You find a website that has a price comparison for the same computer at a series of stores as follows: $1,299; $1,229.99; $1,193.08; $1,279; $1,224.95; $1,229.99; $1,269.95; and $1,249. Can you argue that pricing has a larger standard deviation than claimed by the manufacturer? Use the 5 percent significance level. As a potential buyer, what would be the practical conclusion from your A company packages apples by weight. One of the weight grades is Class A apples. Class A apples have a mean weight of 150 grams, and there is a maximum allowed weight tolerance of 5 percent above or below the mean for apples in the same consumer package. A batch of apples is selected to be included in a Class A apple package. Given the following apple weights of the batch, does the fruit comply with the Class A grade weight tolerance requirements? Conduct an appropriate hypothesis test. (a) At the 5 percent significance level (b) At the 1 percent significance level Weights in selected apple batch (in grams): 158; 167; 149; 169; 164; 139; 154; 150; 157; 171; 152; 161; 141; 166; and 172.
{"url":"https://openstax.org/books/statistics/pages/11-homework","timestamp":"2024-11-08T14:45:25Z","content_type":"text/html","content_length":"442943","record_id":"<urn:uuid:ff6a15b2-0da9-4d08-8d11-512f6d30b7ce>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00722.warc.gz"}
Quadratic Equation - Formula, Examples | Quadratic Formula - [[company name]] [[target location]], [[stateabr]] Quadratic Equation Formula, Examples If you going to try to figure out quadratic equations, we are excited about your journey in math! This is indeed where the most interesting things begins! The data can look too much at start. Despite that, provide yourself a bit of grace and room so there’s no rush or strain when figuring out these questions. To be efficient at quadratic equations like an expert, you will need patience, understanding, and a sense of humor. Now, let’s start learning! What Is the Quadratic Equation? At its core, a quadratic equation is a mathematical formula that states distinct situations in which the rate of deviation is quadratic or proportional to the square of some variable. Though it may look like an abstract theory, it is simply an algebraic equation described like a linear equation. It usually has two solutions and uses complicated roots to solve them, one positive root and one negative, employing the quadratic equation. Working out both the roots will be equal to zero. Meaning of a Quadratic Equation First, remember that a quadratic expression is a polynomial equation that comprises of a quadratic function. It is a second-degree equation, and its usual form is: ax2 + bx + c Where “a,” “b,” and “c” are variables. We can use this equation to figure out x if we put these numbers into the quadratic formula! (We’ll subsequently check it.) All quadratic equations can be written like this, that makes solving them simply, relatively speaking. Example of a quadratic equation Let’s compare the given equation to the last formula: x2 + 5x + 6 = 0 As we can see, there are 2 variables and an independent term, and one of the variables is squared. Thus, compared to the quadratic formula, we can assuredly tell this is a quadratic equation. Generally, you can observe these types of equations when measuring a parabola, that is a U-shaped curve that can be plotted on an XY axis with the data that a quadratic equation offers us. Now that we know what quadratic equations are and what they look like, let’s move on to solving them. How to Figure out a Quadratic Equation Utilizing the Quadratic Formula Although quadratic equations might appear very complex when starting, they can be divided into few easy steps utilizing a straightforward formula. The formula for figuring out quadratic equations consists of setting the equal terms and applying basic algebraic functions like multiplication and division to achieve two answers. Once all functions have been executed, we can solve for the values of the variable. The answer take us another step nearer to find result to our original problem. Steps to Working on a Quadratic Equation Using the Quadratic Formula Let’s promptly plug in the general quadratic equation once more so we don’t overlook what it looks like ax2 + bx + c=0 Ahead of solving anything, bear in mind to detach the variables on one side of the equation. Here are the three steps to figuring out a quadratic equation. Step 1: Note the equation in conventional mode. If there are variables on both sides of the equation, sum all alike terms on one side, so the left-hand side of the equation totals to zero, just like the conventional model of a quadratic equation. Step 2: Factor the equation if workable The standard equation you will conclude with must be factored, usually using the perfect square method. If it isn’t possible, put the terms in the quadratic formula, which will be your best buddy for solving quadratic equations. The quadratic formula seems like this: Every terms correspond to the same terms in a conventional form of a quadratic equation. You’ll be using this significantly, so it is smart move to memorize it. Step 3: Implement the zero product rule and work out the linear equation to discard possibilities. Now once you possess two terms resulting in zero, work on them to get two solutions for x. We have 2 answers due to the fact that the answer for a square root can be both positive or negative. Example 1 2x2 + 4x - x2 = 5 Now, let’s piece down this equation. Primarily, simplify and put it in the conventional form. x2 + 4x - 5 = 0 Immediately, let's recognize the terms. If we contrast these to a standard quadratic equation, we will get the coefficients of x as ensuing: To work out quadratic equations, let's replace this into the quadratic formula and work out “+/-” to include both square root. We work on the second-degree equation to get: Next, let’s simplify the square root to obtain two linear equations and figure out: x=-4+62 x=-4-62 x = 1 x = -5 Now, you have your solution! You can review your solution by checking these terms with the first equation. 12 + (4*1) - 5 = 0 1 + 4 - 5 = 0 -52 + (4*-5) - 5 = 0 25 - 20 - 5 = 0 This is it! You've solved your first quadratic equation utilizing the quadratic formula! Congratulations! Example 2 Let's try another example. 3x2 + 13x = 10 First, place it in the standard form so it is equivalent 0. 3x2 + 13x - 10 = 0 To solve this, we will put in the values like this: a = 3 b = 13 c = -10 Work out x using the quadratic formula! Let’s clarify this as far as possible by figuring it out just like we did in the previous example. Solve all easy equations step by step. You can solve for x by taking the negative and positive square roots. x=-13+176 x=-13-176 x=46 x=-306 x=23 x=-5 Now, you have your result! You can check your work through substitution. 3*(2/3)2 + (13*2/3) - 10 = 0 4/3 + 26/3 - 10 = 0 30/3 - 10 = 0 10 - 10 = 0 3*-52 + (13*-5) - 10 = 0 75 - 65 - 10 =0 And that's it! You will figure out quadratic equations like a professional with some patience and practice! Granted this synopsis of quadratic equations and their basic formula, students can now take on this difficult topic with assurance. By starting with this easy explanation, learners secure a solid grasp before moving on to further intricate theories down in their academics. Grade Potential Can Assist You with the Quadratic Equation If you are struggling to understand these ideas, you may require a math teacher to guide you. It is best to ask for help before you lag behind. With Grade Potential, you can learn all the tips and tricks to ace your subsequent math exam. Grow into a confident quadratic equation problem solver so you are ready for the following intricate ideas in your mathematics studies.
{"url":"https://www.philadelphiainhometutors.com/blog/quadratic-equation-formula-examples-quadratic-formula","timestamp":"2024-11-05T22:45:02Z","content_type":"text/html","content_length":"78951","record_id":"<urn:uuid:c71f42d5-9bbb-45f5-92f3-8253472a303c>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00590.warc.gz"}
DSA - Simple Search vs Binary Search Interview Skills Series I’ve been doing a lot of interview prep and studying for technical interviews. And the one topic that always has myself and thousands of other engineers depressed is data structures and algorithms, aka DSA. For some reason companies like to think that DSA and leetcode is the only way to tell someone is good at programming. And those companies are simultaneously correct and incorrect. There are plenty of examples of folks who get hired into FAANG and ace the interview process because they have memorized DSA and a thousand leetcode questions. But once they get on the job they can’t build anything resilient or performant. Likewise, there are people like me, who are not great at leetcode questions. But once I’m on the job I understand the business and scalability needs and build really strong systems that perform well under load. So, I’ve decided enough is enough and I’m cracking out my older copy of The Algorithm Design Manual I bought back in college, and I’m going to strengthen and maintain my interviewing skills. I will be tagging posts with #interviewing if you care to search for them on my blog. To start off this series I’m going to go over two very simple search algorithms, and when you would use them in the real world. Simple Search and Binary Search. Simple Search Now, there isn’t really an algorithm called simple search, but I can guarantee you’ve all use this one. If you have an array of data such as [1,2,3,4,5,6,7,8,9]. Simple search would iterate over every index of the array and comparing the value to what you’re looking for. For simplicity, and to follow general conventions, I’m going to call the array haystack and the target value needle. Find the needle in the haystack. def simple_search(needle, haystack) haystack.each.with_index do |val, idx| return idx if val == needle haystack = [1, 2, 3, 4, 5, 6, 7, 8, 9] simple_search(8, haystack) => 7 This returns the index of the array where the needle is found. This looks fairly innocent for a small dataset like this, but once you scale this to potentially millions of values it slows down considerably. Using Big Oh notation this is O(n), linear. As the number of values in the array increase, the time required to complete is increased as well. Binary Search Enter Binary Search. Now, there is a caveat to using binary search. The array must already be sorted. Say we have that same array [1,2,3,4,5,6,7,8,9] and we’re looking for the index of 8. You could iterate over the entire array to get index 7. Or you can use binary search. def binary_search(low, high, needle, haystack) return -1 if low > high middle = (low+high)/2 return middle if haystack[middle] == needle if haystack[middle] > needle return binary_search(low, middle-1, needle, haystack) return binary_search(middle+1, high, needle, haystack) haystack = [1, 2, 3, 4, 5, 6, 7, 8, 9] binary_search(0, haystack.length, 8, haystack) => 7 Walking through this code, we have an array of length 9. So we call binary search against a low of 0, and a high of 9, searching for 8 in the haystack. (0+9)/2 = 4.5, ruby rounds this down to 4. The value at index 4 is 5 5 does not equal 8 so we continue down. 5 is less than 8 so we enter the else condition and pass the middle index + 1 to set our low index to 5 (the next index after 4). (5+9)/2 = 7. The value at index 7 is 8. We’ve found our value and return 7. Whats the difference? These both found the same answer, index 7. However, they did it in a different number of steps. Simple Search took 8 steps to find index 7. Binary Search took 2. Binary search has a run time complexity of O(log n). Admittedly I’m not great at math, so I’m not going to pretend I can explain logarithms. But the general idea that makes an algorithm logarithmic is dividing the set you loop over in half each iteration. I do this in the calculation of middle (low+high)/2 and then passing middle+1 or middle-1 in the recursive calls to change the low or high respectively. When would you use Binary Search? In any dataset that is sorted. Page numbers, a word dictionary, phone book, etc.. E.G. If you open a (physical book) dictionary looking for the word map, and you open to the B’s, you know M comes after B so you keep your finger in the B section and flip further back. You are now at the Q’s and have gone too far. so you split the difference between B and Q, and repeat this until you find the M’s and eventually the word map. If you were using simple search, you would scan every single word from A onwards until you found map. Not very efficient eh? This was a rather simple algorithm to digest. I obviously already knew what binary search was, but it was nice to refresh memory on it. I will get into more complex algorithms as we go along. I also plan on covering common Design Patterns in software engineering.
{"url":"https://www.jeremywinterberg.com/p/dsa-simple-search-vs-binary-search","timestamp":"2024-11-03T12:59:36Z","content_type":"text/html","content_length":"166246","record_id":"<urn:uuid:cdde8f5b-4308-45d1-b18a-dd5a5d5d3dcd>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00574.warc.gz"}
Stacks Project Blog Consider the topology τ on the category of schemes where a covering is a finite family of proper morphisms which are jointly surjective. (Dear reader: does this topology have a name?) For the purpose of this post proper hypercoverings will be τ-hypercoverings as defined in the chapter on hypercoverings. Proper hypercoverings are discussed specifically in Brian Conrad’s write up. In this post I wanted to explain an example which I was recently discussing with Bhargav on email. I’d love to hear about other “explicit” examples that you know about; please leave a comment. The example is an example of proper hypercovering for curves. Namely, consider a separable degree 2 map X —> Y of projective nonsingular curves over an algebraically closed field and let y be a ramification point. The simplicial scheme X_* with X_i = normalization of (i + 1)st fibre product of X over Y is NOT a proper hypercovering of Y. Namely, consider the fibre above y (recall that the base change of a proper hypercovering is a proper hypercovering). Then we see that X_0 has one point above y, X_1 has 2 points above y, and X_2 has 4 points above y. But if X_2 is supposed to surject onto the degree 2 part of cosk_1(X_1 => X_0) then the fibre of X_2 over y has to have at least 8 points!!!! Namely cosk_1(S —> *) where S is a set and * is a singleton set is the simplicial set with S^3 in degree 2, S^6 in degree 3, etc because an n-simplex should exist for any collection of (n + 1 choose 2) 1-simplices since each of the 1-simplices bounds the unique 0-simplex on both sides, see for example Remark 0189. So I think that to construct the proper hypercovering we have to throw in some extra points in simplicial degree 2 which sort of glue the two components of X_1. Now, as X_* does work over the complement of the ramification locus in Y, I think you can argue that it really does suffice to add finite sets of points to X_* (over ramification points) to get a proper hypercovering! PS: Proper hypercoverings are interesting since they can be used to express the cohomology of a (singular) variety in terms of cohomologies of smooth varieties. But that’s for another post. Theorem. Let f : X —> Y be a proper morphism of varieties and let y ∈ Y with f^{-1}(y) finite. Then there exists a neighborhood V of y in Y such that f^{-1}(V) —> V is finite. If X is quasi-projective, then there is a simple proof: Choose an affine open U of X containing f^{-1}(y); this uses X quasi-projective. Using properness of f, find an affine open V ⊂ Y such that f^ {-1}V ⊂ U. Then f^{-1}V = V x_Y U is affine as Y is separated. Hence f^{-1}V —> V is a proper morphism of affines varieties. Such a morphism is finite, see Lemma Tag 01WM for an elementary argument. I do not know a truly simple proof for the general case. (Ravi explained a proof to me that avoids most cohomological machinery, but unfortunately I forgot what the exact method was; it may even be one of the arguments I list below.) Here are some different approaches. (A) One can give a proof using cohomology and the theorem on formal functions, see Lemma Tag 020H. Let ZMT be Grothendieck’s algebraic version of Zariski’s main theorem, see Theorem Tag 00Q9. (B) One can prove the result using ZMT and etale localization. Namely, one proves that given any finite type morphism X —> Y with finite fibre over y, there is after etale localization on Y, a decomposition X = U &coprod; W with U finite over Y and the fibre W_y empty (see Section Tag 04HF). In the proper case it follows that W is empty after shrinking Y. Finally, etale descent of the property “being finite” finishes the argument. This method proves a general version of the result, see Lemma Tag 02LS. (C) A mixture of the above two arguments using ZMT and a characterization of affines: 1. Show that after replacing Y by a neighborhood of y we may assume that all fibers of f are finite. This requires showing that dimensions of fibres go up under specialization. You can prove this using generic flatness and the dimension formula (as in Eisenbud for example) or using ZMT. 2. Let X’ —> Y be the normalization of Y in the function field of X. Then X’ —> Y is finite and X’ and X are birational over Y. Finiteness of X’ over Y requires finiteness of integral closure of finite type domains over fields, which follows from Noether normalization + epsilon. 3. Let W ⊂ X x_Y X’ be the closure of the graph of the birational rational map from X to X’. Then W —> X is finite and birational and W —> X’ is proper with finite fibres and birational. 4. Using ZMT one shows that W —> X’ is an isomorphism. Namely, a corollary of ZMT is that separated quasi-finite birational morphisms towards normal varieties are open immersions. 5. Now we have X’ —> X —> Y with the first arrow finite birational and the composition finite too. After shrinking Y we may assume Y and X’ are affine. If X is affine, then we win as O(X) would be a subalgebra ofa finite O(Y)-algebra. 6. Show that X is affine because it is the target of a finite surjective morphism from an affine. Usually one proves this using cohomology. The Noetherian case is Lemma Tag 01YQ (this uses less of the cohomological machinery but still uses the devissage of coherent modules on Noetherian schemes). In fact, the target of a surjective integral morphism from an affine is affine, see Lemma Tag Cocontinuous functors This post is another attempt to explain how incredibly useful the notion of a cocontinuous functor of sites really is. I already tried once here. Let u : C —> D be a functor between sites. We say u is cocontinuous if for every object U of C and every covering {V_j —> u(U)} in D there exists a covering {U_i —> U} in C such that {u(U_i) —> U} refines {V_j —> u(U)}. This is the direct translation of SGA 4, II, Defintion 2.1 into the language of sites as used in the stacks project and in Artin’s notes on Grothendieck topologies. Note that we do not require that u transforms coverings into coverings, i.e., we do not assume u is continuous. Often the condition of cocontinuity is trivial to check. Lemma Tag 00XO A cocontinuous functor defines a morphism of topoi g : Sh(C) —> Sh(D) such that g^{-1}G is the sheaf associated to U |—> G(u(U)). The reader should contrast this with the “default” which is morphisms of topoi associated to continuous functors (where one has to check the exactness of the pull back functor explicitly in each case!). Let’s discuss some examples where the lemma applies. The standard example is the functor Sch/X —> Sch/Y associated to a morphism of schemes X —> Y for any of the topologies Zariski, etale, smooth, syntomic, fppf. This defines functoriality for the big topoi. This also works to give functoriality for big topoi of algebraic spaces and algebraic stacks. In exactly the same way we get functoriality of the big crystalline topoi. Another example is any functor u : C —> D between categories endowed with the chaotic topology, i.e., such that sheaves = presheaves. Then u is cocontinuous and we get a morphism of topoi Sh(C) —> Sh Finally, an important example is localization. Let C be a site and let K be a sheaf of sets. Let C/K be the category of pairs (U, s) where U is an object of C and s ∈ K(U). Endow C/K with the induced topology, i.e., such that {(U_i, s_i) —> (U, s)} is a covering in C/K if and only if {U_i —> U} is a covering in C. Then C/K —> C is cocontinuous (and continuous too) and we obtain a morphism of topoi Sh(C/K) —> Sh(C) whose pullback functor is restriction. What I am absolutely not saying is that the lemma above is a “great” result. What I am saying is that, in algebraic geometry, the lemma is easy to use (no additional conditions to check) and situations where it applies come up frequently and naturally. PS: Warning: In some references a cocontinuous functor is a functor between categories (not sites) is defined as a functor that commutes with colimits. This is a different notion. Too bad!
{"url":"https://www.math.columbia.edu/~dejong/wordpress/?m=201202","timestamp":"2024-11-05T22:18:38Z","content_type":"text/html","content_length":"52033","record_id":"<urn:uuid:b05dd1f6-c7f8-474d-b97f-105562ba4905>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00586.warc.gz"}
Add 2 and 3-digit numbers by redistributing | Oak National Academy Hello, my name is Mr. Tasuman, and I'm really excited to be working If you're ready, let's get started. The outcome of today's lesson is for you to be able to say that you can add three-digit numbers by redistributing, and we're gonna use redistribution as a way of transforming our sums to make addition easier. Here are some really important key words that you're gonna need to be able to understand to be able to access the learning slides. I'm going to say them, and then you are going to repeat them back to me. I'll say my turn, say the word, and then I'll say your turn, and you can say them. My turn, redistribute. Your turn. My turn, inverse. Your turn. My turn, efficient. Your turn. Okay, let's have a look at what each of these words means, and you can try to understand them and use them as you're learning. Redistribution is a way of transforming a sum to make it easier to calculate mentally. The inverse is the opposite or reverse operation. For example, subtraction is the inverse operation of addition. Being efficient means finding a way to solve a problem quickly whilst also maintaining accuracy. Okay, here's an outline then of what we're going to do in the lesson today. We're adding two- and three-digit numbers by redistributing, and the first part of the lesson's going to be about efficient mental addition with redistribution. Then, we're gonna move on to redistributing with three-digit addends. Are you ready? Let's get started. Here's two friends that we'll meet in this lesson, Sophia and Andeep. Now, these two are going to help us with some discussions, and some answers, and some thoughts that they might have in response to some of the prompts that we'll see on the slides. They start by playing with some bucket scales. Andeep puts 29 blue cubes into one side of a bucket balance. Sophia adds 15 red cubes. Andeep puts 30 blue cubes into the other side of the bucket balance, and Sophia adds 14 red cubes. What do you notice? Well, Andeep helps us out here, and he says that he notices the bucket scale is balanced, so the number of cubes on each side must be equal, and Sophia also helps out because she says that both sides have an array of cubes with one missing, but the missing cube is a different colour in each bucket. We can write this set of bucket scales and the cubes within them as an equation. On one side, we have 29 plus 15. On the other side, we have 30 plus 14. The bucket scale is balanced, so we know that the totals are of equal value, so we put an equals sign in the middle. Which of these two expressions would be easiest to solve? Andeep says he would prefer to solve 29 plus 15 because the first addend is lower. Sophia says, "I would prefer to solve 30 plus 14 because it is easier to count on from multiples of 10." Who do you agree with and why? Have a think about it. We can use redistribution to transform addition calculations, making them easier to solve mentally. Base 10 can be used to represent redistribution. We've got 29 plus 15 equals, and we've modelled that using base 10. Sophia says, "This is the same as the bucket scales." She recognises that expression. She says, "I'm going to transform this addition using redistribution. First, I'll check which addend is closest to a boundary. 29 is 1 away from 30. Then, I will subtract 1 from the other addend and redistribute it." There it goes. She's taken 1 away from 15 to leave 14. She sends that 1 over to the first addend to turn it into 30. Now, she's got an easier addition. She says, "I will combine the two addends to find the sum. The sum is 44." Well done, Sophia. What do you notice about the total number of cubes? Sophia says, "I transformed the addends to create a calculation I was more comfortable with. The sum is the same because the total number of cubes is the same. I didn't lose any by redistributing." Jottings can be useful to help with redistribution. We've got the same sum on the right there with base-10 model underneath, and we're gonna go through the same process. We take away 1 from the second addend, and you can see it in the jottings there. We move it over to the first addend by adding on 1 there, and we replace that addend with 30. We combine them, and that gives us 44. Andeep says, "These jottings help me to calculate mentally. I might not need them all of the time." Okay, now it's your turn. We're gonna check your understanding. I'd like you to use redistribution to calculate this addition, 39 plus 27, and Andeep gives us a helpful tip there. He says, "Remember to start by finding the addend closest to a boundary." Pause the video here, have a go, and I'll be back in a little while to give you an answer so you can see how you got on. Good luck. Welcome back, let's see how you got on. Now, you might have spotted that 39 was actually the addend that was closest to a boundary, so we needed to start by taking 1 away from 27 in order to redistribute it over to the first addend, which would turn 39 into 40. 40 plus 26 is an easier calculation to do mentally. You should have got 66. Did you get it? I hope so. Okay, we're gonna move on. Ready? Let's go. Andeep wants to have a go at using redistribution. He uses digit cards to create the following sum, 52 plus 26. He says, "I'm gonna transform the calculation using redistribution so that it's easier to find the sum." He starts with 52, add 26 as a jotting, and this time, he says, "The closest addend is 52, which is 2 away from 50." So he subtracts 2 from 52. He redistributes that 2 onto the second addend, transforming it from 26 to 28. So now he's left with the sum 50 plus 28, which is 78. Okay, your turn to have another go. Use redistribution to calculate this addition, 32 plus 45. And Sophia reminds us, "Remember to start by finding the addend closest to a boundary." Pause the video here so you can have a go, and I'll be back in a moment to give you the answer. Good luck. Welcome back, let's see how you got on. You might have spotted that the addend that was closest to a boundary was 32, so we needed to subtract 2 from that addend, making it 30, and we needed to redistribute that 2 onto 45, creating 30 plus 47, again, a much easier sum to complete. You should have got 77. Well done if you did. Ready to move on? Let's go. We're gonna now have a go at doing some practise tasks. For Number 1A and B, you'll see there are some jottings there that have been partly completed. They have some missing numbers. You can see where there's underlines without a number above them. Your task in 1A and B will be to fill in those missing numbers. And for Number 2, you're going to use redistribution to calculate the sums you can see that have been drawn using digit cards. Pause the video here, have a go at the practise tasks, and then I'll be back in a little while to give you some feedback. Okay, it's time to give you some feedback. I'm gonna reveal the answers for 1A and B. You can see that, in 1A, you should have got 93, and in 1B you should have got 96. Pause the video here so you can mark them carefully if you need to. All right, let's move on to Number 2. The answers for Number 2A was 72, and B was 68. Again, pause the video here if you'd like to mark your answers carefully. Okay, well done. We've completed the first part of the lesson, and now we're going to move on to looking at redistributing but with three-digit addends this time. Let's get ready, and let's go for it. Andeep has another go at redistribution. He creates three-digit addends this time, 320 plus 190. What is different about the redistribution this time? Hmm. Well, Sophia says, "The addends are multiples of 10. Maybe it's best to look for the addend nearest to a hundred boundary." Andeep says, "Good thinking. That's 190, which is the second addend. I wonder if it will still work?" Andeep uses jottings just like before. "Okay, I'm still gonna transform the calculation using redistribution so that it is easier to find the sum," he says confidently. Let's see how he gets on. He sets it out using jottings to help, and he realises that 190 is the addend closest to a hundred boundary. It's 10 away, so he subtracts 10 from 320, and he redistributes the 10 onto 190, making 200. Now, his sum is 310 plus 200, much easier to calculate mentally, so he adds together the new redistributed addends to get 510. Top work, Andeep, well done. And he concludes that it doesn't matter which addend you use for redistribution, and Sophia says, "Also, we redistributed 10s instead of 1s this time, and that works just as well." Great, so it's your turn now. I'd like you to add together these three-digit numbers using redistribution, 440 plus 290. Andeep gives us a little clue here. He says, "Don't forget, look for the addend closest to a hundred boundary." Pause the video here, have a go, and I'll be back in a little while to give you the answer so you can see how you got on. Welcome back, let's see how you did. You've got 440 added to 290, and you might have spotted that the addend that was closest to a hundred boundary was 290, so we needed to start by subtracting 10 from 440 to make 430, redistributing it onto 290 to make 300. Now, we've transformed our sum, and we should get 730. Well done if you did. Okay, let's move on. Sophia changes one of the digit cards in Andeep's sum. You can see it there. Which addend should they redistribute? Have a look and have a think. Well, Sophia says, "Hmm, the addends are no longer both multiples of 10, but I still think we should look for the addend nearest to a hundred boundary." And Andeep says, "Yes, I agree. That's 196 this time. We will have to redistribute some 1s instead of 10s." Andeep uses jottings just like before. He wants to transform the calculation again using redistribution. So he starts out with writing out the sum. He realises that 196 is the addend closest to a hundred boundary. It's four away, so he decides he's going to redistribute using four. He subtracts 4 from 320, redistributes it over to the other addend to make 200. He's got a much easier sum now. He's transformed that addition to 316 plus 200. He adds together the new addends, and he gets 516. Well done, Andeep, brilliant. He says, "You can still redistribute using multiple 1s," and Sophia says, "Yes, and it it works with three-digit numbers. We can still transform the calculation." Okay, your turn again. I'd like you to add together these three-digit numbers using redistribution, 440 plus 296. And Andeep tells us, "Don't forget, look for the addend closest to a hundred boundary." Pause the video here, have a go, and I'll give you the answer in a little while. Welcome back, let's see how you got on. Did you write the addends out like this? You might well have done and seen that 296 was closest to a hundred boundary. It was 4 away, so you might have then taken 4 from the first addend and redistributed it to the second one, giving you a new calculation of 436 plus 300, much easier, and the final answer was 736. Did you get it? Well done if you did. Let's move on. Andeep has one last go at creating a new sum, and he creates 280 plus 450. Which addend should they redistribute? Hmm. Sophia says, "Well, they're both multiples of 10, which means that we are going to be redistributing 10s." Andeep says, "Yes, but this time, it will need to be more than one 10 because both addends are more than 10 away from a hundred boundary." Look at the numbers. You can see that neither of those addends are within 10 of a hundred boundary. So Sophia says, "280 is the closest. It's 20 away from 300." Andeep replies, "Okay, let's try redistributing with 20 then, see if it works." Great attitude, Andeep, okay, let's go. He uses jottings just like before, and he sets out to transform the calculation using redistribution. He knows 280 is the closest addend and that it's 20 away, so he decides to redistribute that one. He takes 20 from 450 and redistributes it over to 280, giving him a new calculation of 300 plus 430. He adds those together and gets 730, nice and simple. Well done, Andeep. Sophia says, "You can still redistribute using multiples of 10." Okay, it's your turn to have a go. I'd like you to add together these three-digit numbers using redistribution, 440 plus 280. Andeep says, "Don't forget, look for the addend closest to a hundred boundary." Pause the video here, have a go, and then we can see how you got on in a moment. Welcome back, let's see how you did. You may have spotted that 280 was the the addend that was closest to a hundred, so we needed to redistribute 20 from 440 and move it over to 280, giving us a new calculation of 420 plus 300, which was 720. Did you get that? I hope so. Okay, let's move on. Andeep and Sophia are making fruit drinks. They choose two fruit juice flavours and mix them together. Andeep says, "I'll pour in 305 millilitres of orange juice." There it goes. Sophia says, "I'll add 180 millilitres of pineapple juice." In it goes, it's mixed together. Andeep wants to pour the content into a drinks bottle but doesn't know if the capacity is great enough. He says, "This looks yummy, but my drinks bottle is only 500 millilitres. Will it have a big enough capacity?" Sophia says, "Let's work it out. If we add together the quantity of each juice, then we will know." Andeep and Sophia write this problem as an equation, 305 millilitres plus 180 millilitres. Andeep says, "Which addend is nearest a hundred boundary?" Sophia replies, "I think 305 because, even though it's greater than 300, it's only 5 away." Great reasoning. So Andeep says, "Let's take 5 away from 305 millilitres, leaving 300 millilitres," and then Sophia says, "Then, redistribute the 5 to make 185 millilitres." They've transformed the calculation, much They add together the redistributed addends to give them 485 millilitres And Andeep realises that will fit in the bottle because 485 millilitres is less than the 500 millilitres of his bottle's Great, pour away, Andeep. Okay, it's your turn, a similar kind of problem here, Sophia and Andeep make another fruit drink. We start with Andeep pouring in 204 millilitres of apple juice, and Sophia pours in 282 millilitres of mango juice. Will Sophia be able to pour all of it into her bottle with a capacity of 500 millilitres? I'm gonna ask you to pause the video here and have a go. I'll be back in a little while to see if you've got it right. Welcome back, let's see what you got. So we start by writing it out as an equation, 204 millilitres plus 282 millilitres. Then, we recognise that 204 is close to a hundred boundary, so we remove 4 from that and redistribute it over to the other addend, giving us 200 millilitres plus 286 millilitres. In total, that's 486 millilitres. Great news, yes, she can pour all the juice into a bottle because 486 millilitres is less than 500 millilitres. All right, it's time for you to have some practise. 1A and B are jottings that have been written out with missing numbers just like you did in Task A. Your job will be to try to write in those missing numbers. For 2A, B, C, and D, you're going to add together those pairs of addends. They're three-digit this time, and for Number 3, we've got a worded problem for you to have a go at. I'll read it to you now. "Sophia is off to visit cousins who live a long way away. She travels with her family by car. They travel 156 kilometres before stopping to get some lunch. Then, they drive a further 203 kilometres without stopping. How many kilometres did they travel on the journey?" Okay, pause the video here and have a go at those practise questions. I'll be back in a little while to give you some feedback. Good luck. Okay, are you ready to do some marking and to see how you got on? Let's go for it. Number 1A and B are displayed there. You've got lots of the missing numbers there. The answers were 550 and 557. Pause the video here and do some marking. Okay, let's do Number 2. 2A was 720. 2B was 727. 2C was 810. And 2D was 372. Pause the video here so you can mark accurately. Finally, Number 3, you had to add together 156 kilometres and 203 kilometres. You would've redistributed by taking 3 from 203 and adding it to 156, transforming the calculation into 159 plus 200 kilometres. The final answer was 359 kilometres. Did you get it? I hope so. Thanks very much for participating in today's lesson. Here's a summary of the things that we have learned or practised. Redistribution can be used to transform a calculation to make it a more efficient mental addition. It can be achieved by subtracting 1s or 10s from one addend and redistributing them to the other addend to create a multiple of 10 or 100. That's easier to count on from. It works for two-digit and three-digit numbers where one of the addends is close to a 10 or 100 boundary. Thanks again for enjoying this lesson with me. My name's Mr. Tasuman, and I hope to see you again soon for another maths lesson.
{"url":"https://www.thenational.academy/pupils/programmes/maths-primary-year-3/units/informal-and-mental-strategies-for-adding-and-subtracting-two-3-digit-numbers/lessons/add-2-and-3-digit-numbers-by-redistributing/video","timestamp":"2024-11-03T18:37:33Z","content_type":"text/html","content_length":"127836","record_id":"<urn:uuid:83b479bc-afae-43d6-8914-de4c0f9ca077>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00087.warc.gz"}
The classes below provide an framework to manipulate geometric data such as points, tangent vectors, isometries, etc. All the core files are geometry independent. In particular, various methods need be overwritten for each geometry. Remark. For these classes, the constructor should a priori be different for each geometry. In practice it often delegates the task to a build method — see Isometry.build() — that can be overwritten easily unlike the constructor. Another way to do would have been to implement for each geometry a new child class. However it would produce a problem of simultaneous inheritance — see for instance the Position() class whose methods may return an Isometry().
{"url":"https://3-dimensional.space/doc/ref/core/geometry.html","timestamp":"2024-11-04T10:59:49Z","content_type":"text/html","content_length":"80623","record_id":"<urn:uuid:dc943684-c04f-4ee5-b406-6a7cbbe86d0f>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00094.warc.gz"}
For this problem, carry at least four digits after the decimal in your calculations. Answers may... For this problem, carry at least four digits after the decimal in your calculations. Answers may... For this problem, carry at least four digits after the decimal in your calculations. Answers may vary slightly due to rounding. A random sample of 5562 physicians in Colorado showed that 3116 provided at least some charity care (i.e., treated poor people at no cost). (a) Let p represent the proportion of all Colorado physicians who provide some charity care. Find a point estimate for p. (Round your answer to four decimal places.) (b) Find a 99% confidence interval for p. (Round your answers to three decimal places.) lower limit upper limit Give a brief explanation of the meaning of your answer in the context of this problem. 99% of all confidence intervals would include the true proportion of Colorado physicians providing at least some charity care. 99% of the confidence intervals created using this method would include the true proportion of Colorado physicians providing at least some charity care. 1% of the confidence intervals created using this method would include the true proportion of Colorado physicians providing at least some charity care. 1% of all confidence intervals would include the true proportion of Colorado physicians providing at least some charity care. (c) Is the normal approximation to the binomial justified in this problem? Explain. Yes; np < 5 and nq < 5. No; np > 5 and nq < 5. No; np < 5 and nq > 5. Yes; np > 5 and nq > 5. The statistical software output for this problem is: a) Point estimate = 0.5602 b) 99% confidence interval: (0.543, 0.577) 99% of the confidence intervals created using this method would include the true proportion of Colorado physicians providing at least some charity care. Option B is correct. c) Yes; np > 5 and nq > 5
{"url":"https://justaaa.com/statistics-and-probability/106014-for-this-problem-carry-at-least-four-digits-after","timestamp":"2024-11-06T08:27:38Z","content_type":"text/html","content_length":"43813","record_id":"<urn:uuid:76c82f39-736b-4fc7-933d-9e33bc712cde>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00023.warc.gz"}
MCODE: random numbers 04-20-2019, 09:43 PM Post: #1 RobertM Posts: 75 Member Joined: Jul 2016 MCODE: random numbers While playing around with 41C MCODE, I've found a need for generating random numbers. Currently, I use the classic ran = INT(seed * n) seed = FRC((seed * 9821) + .211327) to get a random number between 1 <= ran <= n. I perform these calcs using the OS system calls (MP2-10, INTFRC), and because I'm using binary numbers (not BCD) I also have to convert to/from binary to bcd using OS system calls (BCDBIN and GENNUM). I've recently been pondering the inefficiency of all that and wondering if there is a better way to implement random numbers in MCODE using binary directly. Of course, the 41C CPU doesn't have multiplication or division as part of it's repertoire, and I know that it can be simulated with shifting, repeated addition and repeated subtraction, but I haven't gone down those rabbit holes very I was considering doing some form of XORSHIFT algorithm for the random number generation, and then (and this is the part I'm really unsure of) some smart version of shifting and subtraction to do the modulo portion, and wondering if anyone has experience or ideas about random number generation in binary, particularly without multiply/divide? I'm considering this for "simulation" usage, so I don't need a super long period, or cryptographically safe algorithm. 04-20-2019, 10:12 PM Post: #2 John Keith Posts: 1,077 Senior Member Joined: Dec 2013 RE: MCODE: random numbers should meet your needs. It may need some testing to make sure there are no unwanted side-effects of the 41's 56-bit word size. 04-20-2019, 10:40 PM Post: #3 RobertM Posts: 75 Member Joined: Jul 2016 RE: MCODE: random numbers (04-20-2019 10:12 PM)John Keith Wrote: XORSHIFT+ should meet your needs. It may need some testing to make sure there are no unwanted side-effects of the 41's 56-bit word size. Thanks John. I had looked at that, and wasn't sure I wanted to figure out the proper shift values (23/17/26) for a 56 bit word. I was actually thinking of just the simple XORSHIFT32 (inside a 56 bit Any thoughts on the how to algorithmically implement the "modulo(n)" portion without a div instruction? 04-21-2019, 01:35 AM Post: #4 Albert Chan Posts: 2,789 Senior Member Joined: Jul 2018 RE: MCODE: random numbers (04-20-2019 10:40 PM)RobertM Wrote: Any thoughts on the how to algorithmically implement the "modulo(n)" portion without a div instruction? If you don't mind tiny non-biased random modulo, it can be done with a multiply and a shift. see first version of Non-biased modulo is not much harder, see the second version. This required some random number samples to be rejected. Also, I had conversations with Lemire about how divisionless random code work On the last post, there is a link to a paper about it. 04-21-2019, 02:03 AM Post: #5 RobertM Posts: 75 Member Joined: Jul 2016 RE: MCODE: random numbers (04-21-2019 01:35 AM)Albert Chan Wrote: If you don't mind tiny non-biased random modulo, it can be done with a multiply and a shift. see first version of random_bounded() in https://lemire.me/blog/2016/06/30/fast-r...shuffling/ Non-biased modulo is not much harder, see the second version. This required some random number samples to be rejected. Also, I had conversations with Lemire about how divisionless random code work. On the last post, there is a link to a paper about it. Albert, Thanks for this information. Just what I was looking for! 05-03-2019, 09:21 PM Post: #6 Mark Power Posts: 108 Member Joined: Dec 2013 RE: MCODE: random numbers I wrote this up for HPCC Datafile Volume 6 Number 8 December 1987. It might be of some use. My apologies for the scan, but it seems like it is the only electronic copy of the article that I have. User(s) browsing this thread: 1 Guest(s)
{"url":"https://hpmuseum.org/forum/thread-12838-post-115986.html#pid115986","timestamp":"2024-11-14T13:35:12Z","content_type":"application/xhtml+xml","content_length":"33142","record_id":"<urn:uuid:703017ee-56d6-4554-8ffb-664ca902e08a>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00807.warc.gz"}
Shortest Path Problem In graph theory, the shortest path problem is the problem of finding a path between two vertices (or nodes) in a graph such that the sum of the weights of its constituent edges is minimized. This is analogous to the problem of finding the shortest path between two intersections on a road map: the graph's vertices correspond to intersections and the edges correspond to road segments, each weighted by the length of its road segment. Read more about Shortest Path Problem: Definition, Algorithms, Roadnetworks, Applications, Related Problems, Linear Programming Formulation Famous quotes containing the words shortest, path and/or problem: “I have simplified my politics into an utter detestation of all existing governments; and, as it is the shortest and most agreeable and summary feeling imaginable, the first moment of an universal republic would convert me into an advocate for single and uncontradicted despotism. The fact is, riches are power, and poverty is slavery all over the earth, and one sort of establishment is no better, nor worse, for a people than another.” —George Gordon Noel Byron (1788 1824) “We are the pioneers of the world; the advance-guard, sent on through the wilderness of untried things, to break a new path in the New World that is ours.” —Herman Melville (1819 1891) “I don t have any problem with a reporter or a news person who says the President is uninformed on this issue or that issue. I don t think any of us would challenge that. I do have a problem with the singular focus on this, as if that s the only standard by which we ought to judge a president. What we learned in the last administration was how little having an encyclopedic grasp of all the facts has to do with governing.” —David R. Gergen (b. 1942) Related Phrases Related Words
{"url":"https://www.liquisearch.com/shortest_path_problem","timestamp":"2024-11-06T18:59:55Z","content_type":"text/html","content_length":"6791","record_id":"<urn:uuid:0b4c2e21-31f0-4720-9d39-dbc399ff03da>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00155.warc.gz"}
Futures Pricing Equations: Understanding and Application B.5.3 Futures Pricing Equations Futures contracts are essential instruments in the financial markets, providing a mechanism for hedging, speculation, and price discovery. Understanding how futures prices are determined is crucial for anyone involved in trading or managing financial risk. This section delves into the futures pricing equations, focusing on the cost-of-carry model, and explores the relationship between spot and futures prices. We will also discuss the factors that can cause deviations from theoretical prices and the implications for market participants. Understanding Futures Pricing Futures pricing is fundamentally based on the concept of arbitrage. Arbitrage ensures that the futures price aligns with the theoretical price derived from the cost-of-carry model. The cost-of-carry model incorporates the costs and benefits of holding the underlying asset until the delivery date of the futures contract. The Futures Pricing Formula The formula for calculating the theoretical futures price is given by: $$ F = S_0 e^{(r + s - q) T} $$ • \( F \) = Futures price • \( S_0 \) = Spot price of the underlying asset • \( r \) = Risk-free interest rate • \( s \) = Storage costs (if applicable) • \( q \) = Income yield (dividends or convenience yield) • \( T \) = Time to delivery This formula captures the essence of the cost-of-carry model, which considers the costs of financing, storage, and the benefits of holding the asset. Numerical Example Let’s consider a practical example to illustrate the application of the futures pricing formula: • Spot price of gold: $1,500 per ounce • Risk-free rate: 2% per annum • Storage cost: 0.5% per annum • Time to delivery: 6 months (\( T = 0.5 \)) Using the futures pricing formula: $$ F = \$1,500 \times e^{(0.02 + 0.005) \times 0.5} \approx \$1,500 \times e^{0.0125} \approx \$1,518.85 $$ This calculation shows that the theoretical futures price of gold for delivery in six months is approximately $1,518.85 per ounce. The Role of Arbitrage in Futures Pricing Arbitrage plays a critical role in ensuring that futures prices remain aligned with their theoretical values. When the actual futures price deviates from the theoretical price, arbitrageurs can exploit these discrepancies to earn risk-free profits. This process involves buying the undervalued asset and selling the overvalued asset, thereby driving the prices back into alignment. Arbitrage Example Consider a scenario where the actual futures price of gold is $1,530, higher than the theoretical price of $1,518.85. An arbitrageur could: 1. Buy gold in the spot market at $1,500. 2. Sell gold futures at $1,530. 3. Hold the gold until the futures contract matures, incurring storage and financing costs. 4. Deliver the gold against the futures contract at maturity. The arbitrageur profits from the difference between the futures price and the cost of carrying the gold, ensuring that the futures price aligns with the theoretical price. Factors Influencing Deviations from Theoretical Prices While arbitrage helps maintain the alignment between actual and theoretical futures prices, several factors can cause deviations. These include: Market Imperfections Market imperfections such as transaction costs, taxes, and regulatory constraints can prevent arbitrageurs from fully exploiting price discrepancies. These imperfections can lead to persistent deviations between actual and theoretical prices. Transaction Costs Transaction costs, including brokerage fees and bid-ask spreads, can erode the profits from arbitrage opportunities, making it less attractive for arbitrageurs to engage in the necessary trades to correct price discrepancies. Liquidity Constraints In markets with low liquidity, the ability to execute large trades without significantly impacting prices is limited. This constraint can hinder arbitrage activities and allow deviations from theoretical prices to persist. Risk and Uncertainty Uncertainty about future market conditions, such as changes in interest rates or unexpected economic events, can affect the risk perceptions of market participants. This uncertainty can lead to risk premiums being incorporated into futures prices, causing deviations from theoretical values. The Importance of Futures Pricing in Hedging and Speculation Understanding futures pricing is vital for both hedgers and speculators. For hedgers, accurately pricing futures contracts ensures effective risk management by locking in prices and protecting against adverse price movements. For speculators, identifying mispriced futures contracts can provide opportunities for profit. Hedging Strategies Hedgers use futures contracts to mitigate the risk of adverse price movements in the underlying asset. By locking in a future price, they can stabilize cash flows and protect against volatility. Accurate futures pricing is essential for designing effective hedging strategies that align with the firm’s risk management objectives. Speculative Strategies Speculators seek to profit from price movements in futures contracts. By analyzing the relationship between spot and futures prices, speculators can identify opportunities for arbitrage or directional trades. Understanding the factors that influence futures pricing enables speculators to make informed decisions and capitalize on market inefficiencies. Futures pricing equations are a cornerstone of financial markets, providing a framework for understanding the relationship between spot and futures prices. The cost-of-carry model, embodied in the futures pricing formula, captures the essential components of pricing, including financing costs, storage costs, and income yields. While arbitrage ensures alignment between actual and theoretical prices, market imperfections and other factors can lead to deviations. For market participants, understanding these dynamics is crucial for effective hedging and speculative strategies. Quiz Time! 📚✨ Quiz Time! ✨📚 ### What is the primary role of arbitrage in futures pricing? - [x] To ensure futures prices align with theoretical prices - [ ] To increase transaction costs - [ ] To create market imperfections - [ ] To reduce liquidity constraints > **Explanation:** Arbitrage helps align futures prices with theoretical prices by exploiting price discrepancies for risk-free profits. ### Which component of the futures pricing formula accounts for dividends or convenience yield? - [ ] \\( S_0 \\) - [ ] \\( r \\) - [ ] \\( s \\) - [x] \\( q \\) > **Explanation:** The component \\( q \\) in the futures pricing formula represents the income yield, such as dividends or convenience yield. ### In the futures pricing formula, what does \\( T \\) represent? - [ ] Spot price - [ ] Risk-free rate - [ ] Storage costs - [x] Time to delivery > **Explanation:** \\( T \\) represents the time to delivery of the futures contract in the pricing formula. ### What happens when the actual futures price is higher than the theoretical price? - [x] Arbitrageurs sell futures and buy the underlying asset - [ ] Arbitrageurs buy futures and sell the underlying asset - [ ] Arbitrageurs do nothing - [ ] Arbitrageurs increase transaction costs > **Explanation:** Arbitrageurs sell the overpriced futures and buy the underlying asset to profit from the price discrepancy. ### Which factor can cause deviations from theoretical futures prices? - [ ] Perfect market conditions - [ ] Absence of transaction costs - [x] Market imperfections - [ ] High liquidity > **Explanation:** Market imperfections, such as transaction costs and regulatory constraints, can cause deviations from theoretical prices. ### How does liquidity constraint affect futures pricing? - [ ] It increases arbitrage opportunities - [x] It limits the ability to execute large trades - [ ] It reduces transaction costs - [ ] It eliminates market imperfections > **Explanation:** Liquidity constraints limit the ability to execute large trades without impacting prices, affecting arbitrage activities. ### What is the theoretical futures price of an asset with a spot price of \$1,000, risk-free rate of 3%, storage cost of 1%, and time to delivery of 1 year? - [x] \$1,040.81 - [ ] \$1,030.00 - [ ] \$1,010.00 - [ ] \$1,050.00 > **Explanation:** Using the formula \\( F = S_0 e^{(r + s) T} \\), the theoretical futures price is calculated as \$1,040.81. ### What is the impact of transaction costs on arbitrage? - [ ] They increase arbitrage profits - [x] They erode arbitrage profits - [ ] They have no impact - [ ] They eliminate market imperfections > **Explanation:** Transaction costs erode arbitrage profits, making it less attractive to exploit price discrepancies. ### Which strategy involves using futures contracts to mitigate risk? - [ ] Speculation - [x] Hedging - [ ] Arbitrage - [ ] Market making > **Explanation:** Hedging involves using futures contracts to mitigate the risk of adverse price movements. ### True or False: Speculators primarily use futures contracts to stabilize cash flows. - [ ] True - [x] False > **Explanation:** Speculators use futures contracts to profit from price movements, not to stabilize cash flows.
{"url":"https://csccourse.ca/32/7/3/","timestamp":"2024-11-07T09:09:44Z","content_type":"text/html","content_length":"117811","record_id":"<urn:uuid:e29244a8-7cb4-4de1-8552-13c1477f5e11>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00595.warc.gz"}
In a frequency histogram [...] are shown on the horizontal (x) axis. the classes (intervals) In a frequency histogram [...] are shown on the horizontal (x) axis. In a frequency histogram [...] are shown on the horizontal (x) axis. the classes (intervals) If you want to change selection, open document below and click on "Move attachment" Subject 3. Frequency Distributions polygons. 1. A histogram is a bar chart that displays a frequency distribution. It is constructed as follows: The class frequencies are shown on the vertical (y) axis (by the heights of bars drawn next to each other). <span> The classes (intervals) are shown on the horizontal (x) axis. There is no space between the bars. From a histogram, we can see quickly where most of the observations lie. The shapes of histograms will vary, depending on th status not learned measured difficulty 37% [default] last interval [days] repetition number in this series 0 memorised on scheduled repetition scheduled repetition interval last repetition or drill No repetitions
{"url":"https://buboflash.eu/bubo5/show-dao2?d=1636472065292","timestamp":"2024-11-10T19:34:16Z","content_type":"text/html","content_length":"36100","record_id":"<urn:uuid:f3b198f8-8dfe-4f88-b323-7a2f9057e30c>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00356.warc.gz"}
Computer Graphics The student should be fluent in C/C++ programming. The goal of this course is to create a foundation (theory and programming) for understanding the current and future technology underlying computer graphics. Our intention is to create a synergistic mixture of theory and practice. The first part of the class begins with introductory lectures into the mathematical fundamentals and workshops in programming 3D graphics. In the second half of the semester, the class moves to current state of the art methods which are presented by the students. Examples of typical subjects which will be covered are: • 3D modeling • 3D lighting & effects • Real time rendering • Advanced applications and systems At the end of the Computer Graphics course, the student should be able to • understand the theoretical/mathematical fundamentals in computer graphics • understand the programming fundamentals in computer graphics • understand the current strengths and weaknesses of 3D graphics algorithms • have insight into ray tracing algorithms • have insight into illumination and rendering • have insight into interactive line and surface models • have insight into high performance computer graphics software systems • have insight into theoretical and practical problems in computer graphics • build a computer graphics program The most recent version of the schedule can be found on the Liacs website • lectures • seminar • student discussions • presentations • software assignments The final grade is composed of: 1. Presentation (30%) 2. Software assignments/workshops (20%) 3. Project or Exam (50%) • All educational materials are supplied digitally. • Optional reading: Computer Graphics Using Open GL by F. S. Hill, Jr. (Prentice-Hall, 2001 or later, ISBN: 0-02-354856-8) 2006 – 3rd Edition: ISBN-13: 978-0131496705 • Research papers from recent ACM conferences and journals Aanmelden via Usis: Selfservice > Sudentencentrum > Inschrijven Activiteitencodes te vinden via de facultaire website Voor studenten die niet staan ingeschreven voor de bachelor Informatica is er een beperkte capaciteit. Neem contact op met de studieadviseur. Onderwijscoördinator Informatica, M. Derogee Voor dit vak schuiven I&E-studenten aan bij de informaticastudenten in het Snelliusgebouw in Leiden.
{"url":"https://studiegids.universiteitleiden.nl/en/courses/33813/computer-graphics","timestamp":"2024-11-10T13:12:13Z","content_type":"text/html","content_length":"17514","record_id":"<urn:uuid:28d43383-4583-49fb-a884-9c045dfb5345>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00449.warc.gz"}
The basic example Next: An annotated bibliography Up: Introduction Previous: Introduction In the simplest case, let us consider an object whose attenuation coefficient with respect to X-rays at the point x is f(x). We scan the cross section by a thin X-ray beam L of unit intensity. The intensity past the object is This intensity is measured, providing us with the line integral The problem is to compute f from g. In principle this problem has been solved by Radon (1917). Let L be the straight line 1.1) can be written as R is known as the Radon transform. Radon's inversion formula reads where g with respect to s and 1.3) solves our problem. So, why do we write an article on tomography? First, inversion formulas such as (1.3) do not exist in all cases. For instance, in emission tomography, the mathematical model involves weighted line integrals, which in general do not admit explicit inversion. Also, even if explicit inversion is possible, it is not obvious how to turn an inversion formula such as (1.3) into an efficient and accurate algorithm. Many problems concerning sampling and discretization arise. Often not all of the data in an explicit inversion formula can be measured. Finally, (1.1) is a prime example for many imaging techniques, and a proper understanding of the inversion of (1.1) is a necessary prerequisite for the understanding of more complicated problems. Frank Wuebbeling Thu Sep 10 10:51:17 MET DST 1998
{"url":"https://www.uni-muenster.de/AMM/num/Preprints/1998/natterer_1/paper.html/node3.html","timestamp":"2024-11-11T11:03:02Z","content_type":"text/html","content_length":"4359","record_id":"<urn:uuid:579fed9e-0562-4e9a-ba75-f12598b372bc>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00442.warc.gz"}
Math Colloquia - Mathematics, Biology and Mathematical Biology The 21st century is the age of life science. Two issues in the life sciences are that humans live long, healthy lives and maintain a steady state of the earth's ecosystems despite disturbances. In this talk, we will look at how mathematics is incorporated into biology with practical examples. The major terminology will be introduced and various cases of mathematics applied to biology will be presented based on the mathematical model. In particular, it has been classified as mathematical biology by integrating the study of conducting this research, which began in the early 20th century, and many scholars are now participating in expanding the breadth and depth of theory and its applications. When mathematics meets biology, it is possible to derive more reasonable and useful information, and to understand that the applicability of mathematics to other disciplines is infinite. Key words: Mathematical models, Mathematical Modeling, Differential equations, Stochastic Equation, Biomathematics, Mathematical Biology
{"url":"http://my.math.snu.ac.kr/board/index.php?mid=colloquia&page=9&sort_index=Time&order_type=desc&document_srl=800701&l=en","timestamp":"2024-11-11T13:53:59Z","content_type":"text/html","content_length":"45042","record_id":"<urn:uuid:73a0096d-708a-41f5-bdf8-3e931629cb3a>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00486.warc.gz"}
[Solved] A saving bond earns a variable rate of in | SolutionInn Answered step by step Verified Expert Solution A saving bond earns a variable rate of interest that can change six months, with compounding done monthly. The initial rate was 6.8% in early A saving bond earns a variable rate of interest that can change six months, with compounding done monthly. The initial rate was 6.8% in early 2015. If that rate continues unchanged for the 3 years of the bond's duration, what is the APY (annual percentage yield) on a $10,000 bond? (Please answer in term of percentage, e.g: 2.5%, and round to 2 decimal places) There are 3 Steps involved in it Step: 1 Get Instant Access to Expert-Tailored Solutions See step-by-step solutions with expert insights and AI powered tools for academic success Ace Your Homework with AI Get the answers you need in no time with our AI-driven, step-by-step assistance Get Started Recommended Textbook for Authors: R. Charles Moyer, James R. McGuigan, Ramesh P. Rao 13th edition 1285198840, 978-1285198842 More Books Students also viewed these Finance questions View Answer in SolutionInn App
{"url":"https://www.solutioninn.com/study-help/questions/a-saving-bond-earns-a-variable-rate-of-interest-that-198735","timestamp":"2024-11-04T20:54:04Z","content_type":"text/html","content_length":"110144","record_id":"<urn:uuid:b80967db-ef59-4cc5-849c-f35b729132d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00140.warc.gz"}
GW approximation GW approximation# Modules: pyscf.gw, pyscf.pbc.gw The GW approximation is a Green’s function-based method that calculates charged excitation energies, i.e. ionizations potentials (IPs) and electron affinities (EAs). PySCF implements the G[0]W[0] approximation, in which the self-energy is built with mean-field orbitals and orbital energies. Therefore, the results depend on the mean-field starting point, which can be Hartree-Fock or density functional theory. As described below, PySCF has three implementations of the GW approximation, all of which are “full-frequency”. An example GW calculation is shown below: #!/usr/bin/env python A simple example to run a GW calculation from pyscf import gto, dft, gw mol = gto.M( atom = 'H 0 0 0; F 0 0 1.1', basis = 'ccpvdz') mf = dft.RKS(mol) mf.xc = 'pbe' nocc = mol.nelectron//2 # By default, GW is done with analytic continuation gw = gw.GW(mf) # same as gw = gw.GW(mf, freq_int='ac') In this example, the orbs keyword argument is provided to select which GW orbital energies are requested. By default, all orbital energies (occupied and unoccupied) will be calculated, which will increase the cost and may have errors, depending on the method of frequency integration. Frequency integration# The frequency integration needed for the GW approximation can be done in three ways, controlled by the freq_int keyword argument: by analytic continuation (AC, freq_int='ac'), contour deformation (CD, freq_int='cd'), or exactly (Exact, freq_int='exact'). The first two are much more affordable and typically provide sufficient accuracy. GW-AC supports spin-restricted and spin-unrestricted calculations; GW-CD and GW-Exact only support spin-restricted calculations. Details of the GW-AC and GW-CD implementations in PySCF can be found in Ref. [30]. Analytic continuation# Integration via analytic continuation is implemented in the GWAC module that is accessed with freq_int='ac', which is also the default GW module. GW-AC has \(N^4\) scaling and is recommended for valence states only. The analytic continuation can be done using a Pade approximation (default, more reliable) or a two-pole model, controlled by the ac attribute. GW-AC supports frozen core orbitals for reducing computational cost, controlled by the frozen attribute (number of frozen core MOs neglected in GW-AC calculation). Frozen core orbitals are not supported by other GW methods currently. There are two ways to compute GW orbital energies, controlled by the linearized attribute: linearized=False (default) solves the quasiparticle equation through a Newton solver self-consistently, while linearized=True employs a linearization approximation: mygw = gw.GW(mf) # same as freq_int='ac' or GWAC module # mygw.ac = 'pade' # default mygw.ac = 'twopole' mygw.frozen = 1 # default is None mygw.linearized = False # default Contour deformation# Integration via contour deformation is implemented in the GWCD module that is accessed with freq_int='cd'. GW-CD has \(N^4\) scaling and is slower, but more robust, than GW-AC. GW-CD is particularly recommended for accurate core and high-energy states: mygw = gw.GW(mf, freq_int='cd').run(orbs=[0,1]) Exact frequency integration can be carried out analytically and is implemented in the GWExact module that is accessed with freq_int='exact'. Exact integration requires complete diagonalization of the RPA matrix, which has \(N^6\) scaling. However, all orbital energies can be readily obtained without error: mygw = gw.GW(mf, freq_int='exact').run() By default, GW-Exact, like the other GW implementations, use the direct random-phase approximation (dRPA) to screen the Coulomb interaction. Within GW-Exact, any alternative time-dependent mean-field theory (TDHF, TDDFT, etc.) can be also used. The instance of an executed tdscf method can be provided as a keyword argument: #!/usr/bin/env python GW calculation with exact frequency integration and TDDFT screening instead of dRPA from pyscf import gto, dft, gw mol = gto.M( atom = 'H 0 0 0; F 0 0 1.1', basis = 'ccpvdz') mf = dft.RKS(mol) mf.xc = 'pbe' from pyscf import tdscf nocc = mol.nelectron//2 nmo = mf.mo_energy.size nvir = nmo-nocc td = tdscf.TDDFT(mf) td.nstates = nocc*nvir td.verbose = 0 gw = gw.GW(mf, freq_int='exact', tdmf=td)
{"url":"https://pyscf.org/user/gw.html","timestamp":"2024-11-06T14:34:10Z","content_type":"text/html","content_length":"35468","record_id":"<urn:uuid:75ea1471-3432-4826-bc68-815fcbeb72f1>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00457.warc.gz"}
Power in the Frequency Domain - Electrical Engineering Textbooks Power in the Frequency Domain Recalling that the instantaneous power consumed by a circuit element or an equivalent circuit that represents a collection of elements equals the voltage times the current entering the positive-voltage terminal, , what is the equivalent expression using impedances? The resulting calculation reveals more about power consumption in circuits and the introduction of the concept of average power. When all sources produce sinusoids of frequency , the voltage and current for any circuit element or collection of elements are sinusoids of the same frequency. Here, the complex amplitude of the voltage and that of the current is . We can also write the voltage and current in terms of their complex amplitudes using Euler's formula. Multiplying these two expressions and simplifying gives We define to be complex power. The real-part of complex power is the first term and since it does not change with time, it represents the power consistently consumed/produced by the circuit. The second term varies with time at a frequency twice that of the source. Conceptually, this term details how power "sloshes" back and forth in the circuit because of the sinusoidal source. From another viewpoint, the real-part of complex power represents long-term energy consumption/production. Energy is the integral of power and, as the integration interval increases, the first term appreciates while the time-varying term "sloshes." Consequently, the most convenient definition of the average power consumed/produced by any circuit is in terms of complex amplitudes. Suppose the complex amplitudes of the voltage and current have fixed magnitudes. What phase relationship between voltage and current maximizes the average power? In other words, how are related for maximum power dissipation? For maximum power dissipation, the imaginary part of complex power should be zero. As the complex power is given by , zero imaginary part occurs when the phases of the voltage and currents agree. Because the complex amplitudes of the voltage and current are related by the equivalent impedance, average power can also be written as These expressions generalize the results we obtained for resistor circuits. We have derived a fundamental result: Only the real part of impedance contributes to long-term power dissipation. Of the circuit elements, only the resistor dissipates power. Capacitors and inductors dissipate no power in the long term. It is important to realize that these statements apply only for sinusoidal sources. If you turn on a constant voltage source in an RC-circuit, charging the capacitor does consume power. In an earlier problem, we found that the rms value of a sinusoid was its amplitude divided by . What is average power expressed in terms of the rms values of the voltage and current ( . The cosine term is known as the power factor. Explore CircuitBread Get the latest tools and tutorials, fresh from the toaster.
{"url":"https://www.circuitbread.com/textbooks/fundamentals-of-electrical-engineering-i/analog-signal-processing/power-in-the-frequency-domain","timestamp":"2024-11-07T23:43:11Z","content_type":"text/html","content_length":"939584","record_id":"<urn:uuid:f88b0c1b-2570-415e-9671-61bcc7fdb810>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00087.warc.gz"}
Yet Another Haskell Tutorial/Language advanced - Wikibooks, open books for an open world Sections and Infix Operators We've already seen how to double the values of elements in a list using map: Prelude> map (\x -> x*2) [1,2,3,4] However, there is a more concise way to write this: Prelude> map (*2) [1,2,3,4] This type of thing can be done for any infix function: Prelude> map (+5) [1,2,3,4] Prelude> map (/2) [1,2,3,4] Prelude> map (2/) [1,2,3,4] You might be tempted to try to subtract values from elements in a list by mapping -2 across a list. This won't work, though, because while the + in +2 is parsed as the standard plus operator (as there is no ambiguity), the - in -2 is interpreted as the unary minus, not the binary minus. Thus -2 here is the number ${\displaystyle -2}$ , not the function ${\displaystyle \lambda x.x-2}$ . In general, these are called sections. For binary infix operators (like +), we can cause the function to become prefix by enclosing it in parentheses. For example: Prelude> (+) 5 3 Prelude> (-) 5 3 Additionally, we can provide either of its argument to make a section. For example: Prelude> (+5) 3 Prelude> (/3) 6 Prelude> (3/) 6 Non-infix functions can be made infix by enclosing them in backquotes ("\`"). For example: Prelude> (+2) `map` [1..10] Prelude> (`map` [1..10]) (+2) Recall back from the section on Functions, there are many computations which require using the result of the same computation in multiple places in a function. There, we considered the function for computing the roots of a quadratic polynomial: roots a b c = ((-b + sqrt(b*b - 4*a*c)) / (2*a), (-b - sqrt(b*b - 4*a*c)) / (2*a)) In addition to the let bindings introduced there, we can do this using a where clause. where clauses come immediately after function definitions and introduce a new level of layout (see the section on Layout). We write this as: roots a b c = ((-b + det) / (2*a), (-b - det) / (2*a)) where det = sqrt(b*b-4*a*c) Any values defined in a where clause shadow any other values with the same name. For instance, if we had the following code block: det = "Hello World" roots a b c = ((-b + det) / (2*a), (-b - det) / (2*a)) where det = sqrt(b*b-4*a*c) f _ = det The value of roots doesn't notice the top-level declaration of det, since it is shadowed by the local definition (the fact that the types don't match doesn't matter either). Furthermore, since f cannot "see inside" of roots, the only thing it knows about det is what is available at the top level, which is the string "Hello World." We could also pull out the 2*a computation and get the following code: roots a b c = ((-b + det) / (a2), (-b - det) / (a2)) where det = sqrt(b*b-4*a*c) a2 = 2*a Sub-expressions in where clauses must come after function definitions. Sometimes it is more convenient to put the local definitions before the actual expression of the function. This can be done by using let/in clauses. We have already seen let clauses; where clauses are virtually identical to their let clause cousins except for their placement. The same roots function can be written using let roots a b c = let det = sqrt (b*b - 4*a*c) a2 = 2*a in ((-b + det) / a2, (-b - det) / a2) Using a where clause, it looks like: roots a b c = ((-b + det) / a2, (-b - det) / a2) det = sqrt (b*b - 4*a*c) a2 = 2*a These two types of clauses can be mixed (i.e., you can write a function which has both a let cause and a where clause). This is strongly advised against, as it tends to make code difficult to read. However, if you choose to do it, values in the let clause shadow those in the where clause. So if you define the function: f x = let y = x+1 in y where y = x+2 The value of f 5 is 6, not 7. Of course, I plead with you to never ever write code that looks like this. No one should have to remember this rule and by shadowing where-defined values in a let clause only makes your code difficult to understand. In general, whether you should use let clauses or where clauses is largely a matter of personal preference. Usually, the names you give to the subexpressions should be sufficiently expressive that without reading their definitions any reader of your code should be able to figure out what they do. In this case, where clauses are probably more desirable because they allow the reader to see immediately what a function does. However, in real life, values are often given cryptic names. Partial application is when you take a function which takes ${\displaystyle n}$ arguments and you supply it with ${\displaystyle <n}$ of them. When discussing Sections, we saw a form of "partial application" in which functions like + were partially applied. For instance, in the expression map (+1) [1,2,3], the section (+1) is a partial application of +. This is because + really takes two arguments, but we've only given it one. Partial application is very common in function definitions and sometimes goes by the name "η (eta) reduction". For instance, suppose we are writing a function lcaseString which converts a whole string into lower case. We could write this as: lcaseString s = map toLower s Here, there is no partial application (though you could argue that applying no arguments to toLower could be considered partial application). However, we notice that the application of s occurs at the end of both lcaseString and of map toLower. In fact, we can remove it by performing eta reduction, to get: lcaseString = map toLower Now, we have a partial application of map: it expects a function and a list, but we've only given it the function. This all is related to the type of map, which is (a -> b) -> ([a] -> [b]), when all parentheses are included. In our case, toLower is of type Char -> Char. Thus, if we supply this function to map, we get a function of type [Char] -> [Char], as desired. Now, consider the task of converting a string to lowercase and removing all non letter characters. We might write this as: lcaseLetters s = map toLower (filter isAlpha s) But note that we can actually write this in terms of function composition: lcaseLetters s = (map toLower . filter isAlpha) s And again, we're left with an eta reducible function: lcaseLetters = map toLower . filter isAlpha Writing functions in this style is very common among advanced Haskell users. In fact it has a name: point-free programming (not to be confused with pointless programming). It is called point free because in the original definition of lcaseLetters, we can think of the value s as a point on which the function is operating. By removing the point from the function definition, we have a point-free A function similar to (.) is ($). Whereas (.) is function composition, ($) is function application. The definition of ($) from the Prelude is very simple: f $ x = f x However, this function is given very low fixity, which means that it can be used to replace parentheses. For instance, we might write a function: foo x y = bar y (baz (fluff (ork x))) However, using the function application function, we can rewrite this as: foo x y = bar y $ baz $ fluff $ ork x This moderately resembles the function composition syntax. The ($) function is also useful when combined with other infix functions. For instance, we cannot write: Prelude> putStrLn "5+3=" ++ show (5+3) because this is interpreted as (putStrLn "5+3=") ++ (show (5+3)), which makes no sense. However, we can fix this by writing instead: Prelude> putStrLn $ "5+3=" ++ show (5+3) Which works fine. Consider now the task of extracting from a list of tuples all the ones whose first component is greater than zero. One way to write this would be: fstGt0 l = filter (\ (a,b) -> a>0) l We can first apply eta reduction to the whole function, yielding: fstGt0 = filter (\ (a,b) -> a>0) Now, we can rewrite the lambda function to use the fst function instead of the pattern matching: fstGt0 = filter (\x -> fst x > 0) Now, we can use function composition between fst and > to get: fstGt0 = filter (\x -> ((>0) . fst) x) And finally we can eta reduce: fstGt0 = filter ((>0).fst) This definition is simultaneously shorter and easier to understand than the original. We can clearly see exactly what it is doing: we're filtering a list by checking whether something is greater than zero. What are we checking? The fst element. While converting to point free style often results in clearer code, this is of course not always the case. For instance, converting the following map to point free style yields something nearly foo = map (\x -> sqrt (3+4*(x^2))) foo = map (sqrt . (3+) . (4*) . (^2)) There are a handful of combinators defined in the Prelude which are useful for point free programming: • uncurry takes a function of type a -> b -> c and converts it into a function of type (a,b) -> c. This is useful, for example, when mapping across a list of pairs: Prelude> map (uncurry (*)) [(1,2),(3,4),(5,6)] • curry is the opposite of uncurry and takes a function of type (a,b) -> c and produces a function of type a -> b -> c. • flip reverse the order of the first two arguments to a function. That is, it takes a function of type a -> b -> c and produces a function of type b -> a -> c. For instance, we can sort a list in reverse order by using flip compare: Prelude> List.sortBy compare [5,1,8,3] Prelude> List.sortBy (flip compare) [5,1,8,3] This is the same as saying: Prelude> List.sortBy (\a b -> compare b a) [5,1,8,3] only shorter. Of course, not all functions can be written in point free style. For instance: square x = x*x Cannot be written in point free style, without some other combinators. For instance, if we can define other functions, we can write: pair x = (x,x) square = uncurry (*) . pair But in this case, this is not terribly useful. Convert the following functions into point-free style, if possible. func1 x l = map (\y -> y*x) l func2 f g l = filter f (map g l) func3 f l = l ++ map f l func4 l = map (\y -> y+2) (filter (\z -> z `elem` [1..10]) func5 f l = foldr (\x y -> f (y,x)) 0 l Pattern matching is one of the most powerful features of Haskell (and most functional programming languages). It is most commonly used in conjunction with case expressions, which we have already seen in the section on Functions. Let's return to our Color example from the section on Datatypes. I'll repeat the definition we already had for the datatype: data Color = Red | Orange | Yellow | Green | Blue | Purple | White | Black | Custom Int Int Int -- R G B components deriving (Show,Eq) We then want to write a function that will convert between something of type Color and a triple of Ints, which correspond to the RGB values, respectively. Specifically, if we see a Color which is Red, we want to return (255,0,0), since this is the RGB value for red. So we write that (remember that piecewise function definitions are just case statements): colorToRGB Red = (255,0,0) If we see a Color which is Orange, we want to return (255,128,0); and if we see Yellow, we want to return (255,255,0), and so on. Finally, if we see a custom color, which is comprised of three components, we want to make a triple out of these, so we write: colorToRGB Orange = (255,128,0) colorToRGB Yellow = (255,255,0) colorToRGB Green = (0,255,0) colorToRGB Blue = (0,0,255) colorToRGB Purple = (255,0,255) colorToRGB White = (255,255,255) colorToRGB Black = (0,0,0) colorToRGB (Custom r g b) = (r,g,b) Then, in our interpreter, if we type: Color> colorToRGB Yellow What is happening is this: we create a value, call it ${\displaystyle x}$ , which has value Yellow. We then apply this to colorToRGB. We check to see if we can "match" ${\displaystyle x}$ against Red. This match fails because according to the definition of Eq{Color}, Red is not equal to Yellow. We continue down the definitions of colorToRGB and try to match Yellow against Orange. This fails, too. We the try to match Yellow against Yellow, which succeeds, so we use this function definition, which simply returns the value (255,255,0), as expected. Suppose instead, we used a custom color: Color> colorToRGB (Custom 50 200 100) We apply the same matching process, failing on all values from Red to Black. We then get to try to match Custom 50 200 100 against Custom r g b. We can see that the Custom part matches, so then we go see if the subelements match. In the matching, the variables r, g and b are essentially wild cards, so there is no trouble matching r with 50, g with 200 and b with 100. As a "side-effect" of this matching, r gets the value 50, g gets the value 200 and b gets the value 100. So the entire match succeeded and we look at the definition of this part of the function and bundle up the triple using the matched values of r, g and b. We can also write a function to check to see if a Color is a custom color or not: isCustomColor (Custom _ _ _) = True isCustomColor _ = False When we apply a value to isCustomColor it tries to match that value against Custom _ _ _. This match will succeed if the value is Custom x y z for any x, y and z. The _ (underscore) character is a "wildcard" and will match anything, but will not do the binding that would happen if you put a variable name there. If this match succeeds, the function returns True; however, if this match fails, it goes on to the next line, which will match anything and then return False. For some reason we might want to define a function which tells us whether a given color is "bright" or not, where my definition of "bright" is that one of its RGB components is equal to 255 (admittedly an arbitrary definition, but it's simply an example). We could define this function as: isBright = isBright' . colorToRGB where isBright' (255,_,_) = True isBright' (_,255,_) = True isBright' (_,_,255) = True isBright' _ = False Let's dwell on this definition for a second. The isBright function is the composition of our previously defined function colorToRGB and a helper function isBright', which tells us if a given RGB value is bright or not. We could replace the first line here with isBright c = isBright' (colorToRGB c) but there is no need to explicitly write the parameter here, so we don't. Again, this function composition style of programming takes some getting used to, so I will try to use it frequently in this tutorial. The isBright' helper function takes the RGB triple produced by colorToRGB. It first tries to match it against (255,_,_) which succeeds if the value has 255 in its first position. If this match succeeds, isBright' returns True and so does isBright. The second and third line of definition check for 255 in the second and third position in the triple, respectively. The fourth line, the fallthrough, matches everything else and reports it as not bright. We might want to also write a function to convert between RGB triples and Colors. We could simply stick everything in a Custom constructor, but this would defeat the purpose; we want to use the Custom slot only for values which don't match the predefined colors. However, we don't want to allow the user to construct custom colors like (600,-40,99) since these are invalid RGB values. We could throw an error if such a value is given, but this can be difficult to deal with. Instead, we use the Maybe datatype. This is defined (in the Prelude) as: data Maybe a = Nothing | Just a The way we use this is as follows: our rgbToColor function returns a value of type Maybe Color. If the RGB value passed to our function is invalid, we return Nothing, which corresponds to a failure. If, on the other hand, the RGB value is valid, we create the appropriate Color value and return Just that. The code to do this is: rgbToColor 255 0 0 = Just Red rgbToColor 255 128 0 = Just Orange rgbToColor 255 255 0 = Just Yellow rgbToColor 0 255 0 = Just Green rgbToColor 0 0 255 = Just Blue rgbToColor 255 0 255 = Just Purple rgbToColor 255 255 255 = Just White rgbToColor 0 0 0 = Just Black rgbToColor r g b = if 0 <= r && r <= 255 && 0 <= g && g <= 255 && 0 <= b && b <= 255 then Just (Custom r g b) else Nothing -- invalid RGB value The first eight lines match the RGB arguments against the predefined values and, if they match, rgbToColor returns Just the appropriate color. If none of these matches, the last definition of rgbToColor matches the first argument against r, the second against g and the third against b (which causes the side-effect of binding these values). It then checks to see if these values are valid (each is greater than or equal to zero and less than or equal to 255). If so, it returns Just (Custom r g b); if not, it returns Nothing corresponding to an invalid color. Using this, we can write a function that checks to see if a right RGB value is valid: rgbIsValid r g b = rgbIsValid' (rgbToColor r g b) where rgbIsValid' (Just _) = True rgbIsValid' _ = False Here, we compose the helper function rgbIsValid' with our function rgbToColor. The helper function checks to see if the value returned by rgbToColor is Just anything (the wildcard). If so, it returns True. If not, it matches anything and returns False. Pattern matching isn't magic, though. You can only match against datatypes; you cannot match against functions. For instance, the following is invalid: f x = x + 1 g (f x) = x Even though the intended meaning of g is clear (i.e., g x = x - 1), the compiler doesn't know in general that f has an inverse function, so it can't perform matches like this. Guards can be thought of as an extension to the pattern matching facility. They enable you to allow piecewise function definitions to be taken according to arbitrary boolean expressions. Guards appear after all arguments to a function but before the equals sign, and are begun with a vertical bar. We could use guards to write a simple function which returns a string telling you the result of comparing two elements: comparison x y | x < y = "The first is less" | x > y = "The second is less" | otherwise = "They are equal" You can read the vertical bar as "such that." So we say that the value of comparison x y "such that" x is less than y is "The first is less." The value such that x is greater than y is "The second is less" and the value otherwise is "They are equal". The keyword otherwise is simply defined to be equal to True and thus matches anything that falls through that far. So, we can see that this works: Guards> comparison 5 10 "The first is less" Guards> comparison 10 5 "The second is less" Guards> comparison 7 7 "They are equal" Guards are applied in conjunction with pattern matching. When a pattern matches, all of its guards are tried, consecutively, until one matches. If none match, then pattern matching continues with the next pattern. One nicety about guards is that where clauses are common to all guards. So another possible definition for our isBright function from the previous section would be: isBright2 c | r == 255 = True | g == 255 = True | b == 255 = True | otherwise = False where (r,g,b) = colorToRGB c The function is equivalent to the previous version, but performs its calculation slightly differently. It takes a color, c, and applies colorToRGB to it, yielding an RGB triple which is matched (using pattern matching!) against (r,g,b). This match succeeds and the values r, g and b are bound to their respective values. The first guard checks to see if r is 255 and, if so, returns true. The second and third guard check g and b against 255, respectively and return true if they match. The last guard fires as a last resort and returns False. Instance Declarations In order to declare a type to be an instance of a class, you need to provide an instance declaration for it. Most classes provide what's called a "minimal complete definition." This means the functions which must be implemented for this class in order for its definition to be satisfied. Once you've written these functions for your type, you can declare it an instance of the class. The Eq class has two members (i.e., two functions): (==) :: Eq a => a -> a -> Bool (/=) :: Eq a => a -> a -> Bool The first of these type signatures reads that the function == is a function which takes two as which are members of Eq and produces a Bool. The type signature of /= (not equal) is identical. A minimal complete definition for the Eq class requires that either one of these functions be defined (if you define ==, then /= is defined automatically by negating the result of ==, and vice versa). These declarations must be provided inside the instance declaration. This is best demonstrated by example. Suppose we have our color example, repeated here for convenience: data Color = Red | Orange | Yellow | Green | Blue | Purple | White | Black | Custom Int Int Int -- R G B components We can define Color to be an instance of Eq by the following declaration: instance Eq Color where Red == Red = True Orange == Orange = True Yellow == Yellow = True Green == Green = True Blue == Blue = True Purple == Purple = True White == White = True Black == Black = True (Custom r g b) == (Custom r' g' b') = r == r' && g == g' && b == b' _ == _ = False The first line here begins with the keyword instance telling the compiler that we're making an instance declaration. It then specifies the class, Eq, and the type, Color which is going to be an instance of this class. Following that, there's the where keyword. Finally there's the method declaration. The first eight lines of the method declaration are basically identical. The first one, for instance, says that the value of the expression Red == Red is equal to True. Lines two through eight are identical. The declaration for custom colors is a bit different. We pattern match Custom on both sides of ==. On the left hand side, we bind r, g and b to the components, respectively. On the right hand side, we bind r', g' and b' to the components. We then say that these two custom colors are equal precisely when r == r', g == g' and b == b' are all equal. The fallthrough says that any pair we haven't previously declared as equal are unequal. The Show class is used to display arbitrary values as strings. This class has three methods: show :: Show a => a -> String showsPrec :: Show a => Int -> a -> String -> String showList :: Show a => [a] -> String -> String A minimal complete definition is either show or showsPrec (we will talk about showsPrec later -- it's in there for efficiency reasons). We can define our Color datatype to be an instance of Show with the following instance declaration: instance Show Color where show Red = "Red" show Orange = "Orange" show Yellow = "Yellow" show Green = "Green" show Blue = "Blue" show Purple = "Purple" show White = "White" show Black = "Black" show (Custom r g b) = "Custom " ++ show r ++ " " ++ show g ++ " " ++ show b This declaration specifies exactly how to convert values of type Color to Strings. Again, the first eight lines are identical and simply take a Color and produce a string. The last line for handling custom colors matches out the RGB components and creates a string by concatenating the result of showing the components individually (with spaces in between and "Custom" at the beginning). Other Important Classes There are a few other important classes which I will mention briefly because either they are commonly used or because we will be using them shortly. I won't provide example instance declarations; how you can do this should be clear by now. The ordering class, the functions are: compare :: Ord a => a -> a -> Ordering (<=) :: Ord a => a -> a -> Bool (>) :: Ord a => a -> a -> Bool (>=) :: Ord a => a -> a -> Bool (<) :: Ord a => a -> a -> Bool min :: Ord a => a -> a -> a max :: Ord a => a -> a -> a Almost any of the functions alone is a minimal complete definition; it is recommended that you implement compare if you implement only one, though. This function returns a value of type Ordering which is defined as: data Ordering = LT | EQ | GT So, for instance, we get: Prelude> compare 5 7 Prelude> compare 6 6 Prelude> compare 7 5 In order to declare a type to be an instance of Ord you must already have declared it an instance of Eq (in other words, Ord is a subclass of Eq -- more about this in the section on Classes). The Enum class is for enumerated types; that is, for types where each element has a successor and a predecessor. Its methods are: pred :: Enum a => a -> a succ :: Enum a => a -> a toEnum :: Enum a => Int -> a fromEnum :: Enum a => a -> Int enumFrom :: Enum a => a -> [a] enumFromThen :: Enum a => a -> a -> [a] enumFromTo :: Enum a => a -> a -> [a] enumFromThenTo :: Enum a => a -> a -> a -> [a] The minimal complete definition contains both toEnum and fromEnum, which converts from and to Ints. The pred and succ functions give the predecessor and successor, respectively. The enum functions enumerate lists of elements. For instance, enumFrom x lists all elements after x; enumFromThen x step lists all elements starting at x in steps of size step. The To functions end the enumeration at the given element. The Num class provides the standard arithmetic operations: (-) :: Num a => a -> a -> a (*) :: Num a => a -> a -> a (+) :: Num a => a -> a -> a negate :: Num a => a -> a signum :: Num a => a -> a abs :: Num a => a -> a fromInteger :: Num a => Integer -> a All of these are obvious except for perhaps negate which is the unary minus. That is, negate x means ${\displaystyle -x}$ . The Read class is the opposite of the Show class. It is a way to take a string and read in from it a value of arbitrary type. The methods for Read are: readsPrec :: Read a => Int -> String -> [(a, String)] readList :: String -> [([a], String)] The minimal complete definition is readsPrec. The most important function related to this is read, which uses readsPrec as: read s = fst (head (readsPrec 0 s)) This will fail if parsing the string fails. You could define a maybeRead function as: maybeRead s = case readsPrec 0 s of [(a,_)] -> Just a _ -> Nothing How to write and use readsPrec directly will be discussed further in the examples. Suppose we are defining the Maybe datatype from scratch. The definition would be something like: data Maybe a = Nothing | Just a Now, when we go to write the instance declarations, for, say, Eq, we need to know that a is an instance of Eq otherwise we can't write a declaration. We express this as: instance Eq a => Eq (Maybe a) where Nothing == Nothing = True (Just x) == (Just x') = x == x' This first line can be read "That a is an instance of Eq implies (=>) that Maybe a is an instance of Eq." Writing obvious Eq, Ord, Read and Show classes like these is tedious and should be automated. Luckily for us, it is. If you write a datatype that's "simple enough" (almost any datatype you'll write unless you start writing fixed point types), the compiler can automatically derive some of the most basic classes. To do this, you simply add a deriving clause to after the datatype declaration, as data Color = Red | ... | Custom Int Int Int -- R G B components deriving (Eq, Ord, Show, Read) This will automatically create instances of the Color datatype of the named classes. Similarly, the declaration: data Maybe a = Nothing | Just a deriving (Eq, Ord, Show, Read) derives these classes just when a is appropriate. All in all, you are allowed to derive instances of Eq, Ord, Enum, Bounded, Show and Read. There is considerable work in the area of "polytypic programming" or "generic programming" which, among other things, would allow for instance declarations for any class to be derived. This is much beyond the scope of this tutorial; instead, I refer you to the literature. I know by this point you're probably terribly tired of hearing about datatypes. They are, however, incredibly important, otherwise I wouldn't devote so much time to them. Datatypes offer a sort of notational convenience if you have, for instance, a datatype that holds many many values. These are called named fields. Consider a datatype whose purpose is to hold configuration settings. Usually when you extract members from this type, you really only care about one or possibly two of the many settings. Moreover, if many of the settings have the same type, you might often find yourself wondering "wait, was this the fourth or fifth element?" One thing you could do would be to write accessor functions. Consider the following made-up configuration type for a terminal program: data Configuration = Configuration String -- user name String -- local host String -- remote host Bool -- is guest? Bool -- is super user? String -- current directory String -- home directory Integer -- time connected deriving (Eq, Show) You could then write accessor functions, like (I've only listed a few): getUserName (Configuration un _ _ _ _ _ _ _) = un getLocalHost (Configuration _ lh _ _ _ _ _ _) = lh getRemoteHost (Configuration _ _ rh _ _ _ _ _) = rh getIsGuest (Configuration _ _ _ ig _ _ _ _) = ig You could also write update functions to update a single element. Of course, now if you add an element to the configuration, or remove one, all of these functions now have to take a different number of arguments. This is highly annoying and is an easy place for bugs to slip in. However, there's a solution. We simply give names to the fields in the datatype declaration, as follows: data Configuration = Configuration { username :: String, localhost :: String, remotehost :: String, isguest :: Bool, issuperuser :: Bool, currentdir :: String, homedir :: String, timeconnected :: Integer This will automatically generate the following accessor functions for us: username :: Configuration -> String localhost :: Configuration -> String Moreover, it gives us very convenient update methods. Here is a short example for a "post working directory" and "change directory" like functions that work on Configurations: changeDir :: Configuration -> String -> Configuration changeDir cfg newDir = -- make sure the directory exists if directoryExists newDir then -- change our current directory cfg{currentdir = newDir} else error "directory does not exist" postWorkingDir :: Configuration -> String -- retrieve our current directory postWorkingDir cfg = currentdir cfg So, in general, to update the field x in a datatype y to z, you write y{x=z}. You can change more than one; each should be separated by commas, for instance, y{x=z, a=b, c=d}. You can of course continue to pattern match against Configurations as you did before. The named fields are simply syntactic sugar; you can still write something like: getUserName (Configuration un _ _ _ _ _ _ _) = un But there is little reason to. Finally, you can pattern match against named fields as in: getHostData (Configuration {localhost=lh,remotehost=rh}) = (lh,rh) This matches the variable lh against the localhost field on the Configuration and the variable rh against the remotehost field on the Configuration. These matches of course succeed. You could also constrain the matches by putting values instead of variable names in these positions, as you would for standard datatypes. You can create values of Configuration in the old way as shown in the first definition below, or in the named-field's type, as shown in the second definition below: initCFG = Configuration "nobody" "nowhere" "nowhere" False False "/" "/" 0 initCFG' = { username="nobody", timeconnected=0 } Though the second is probably much more understandable unless you litter your code with comments. to do: put something here Standard List Functions Recall that the definition of the built-in Haskell list datatype is equivalent to: data List a = Nil | Cons a (List a) With the exception that Nil is called [] and Cons x xs is called x:xs. This is simply to make pattern matching easier and code smaller. Let's investigate how some of the standard list functions may be written. Consider map. A definition is given below: map _ [] = [] map f (x:xs) = f x : map f xs Here, the first line says that when you map across an empty list, no matter what the function is, you get an empty list back. The second line says that when you map across a list with x as the head and xs as the tail, the result is f applied to x consed onto the result of mapping f on xs. The filter can be defined similarly: filter _ [] = [] filter p (x:xs) | p x = x : filter p xs | otherwise = filter p xs How this works should be clear. For an empty list, we return an empty list. For a non empty list, we return the filter of the tail, perhaps with the head on the front, depending on whether it satisfies the predicate p or not. We can define foldr as: foldr _ z [] = z foldr f z (x:xs) = f x (foldr f z xs) Here, the best interpretation is that we are replacing the empty list ([]) with a particular value and the list constructor (:) with some function. On the first line, we can see the replacement of [] for z. Using backquotes to make f infix, we can write the second line as: foldr f z (x:xs) = x `f` (foldr f z xs) From this, we can directly see how : is being replaced by f. Finally, foldl: foldl _ z [] = z foldl f z (x:xs) = foldl f (f z x) xs This is slightly more complicated. Remember, z can be thought of as the current state. So if we're folding across a list which is empty, we simply return the current state. On the other hand, if the list is not empty, it's of the form x:xs. In this case, we get a new state by applying f to the current state z and the current list element x and then recursively call foldl on xs with this new There is another class of functions: the zip and unzip functions, which respectively take multiple lists and make one or take one lists and split them apart. For instance, zip does the following: Prelude> zip "hello" [1,2,3,4,5] Basically, it pairs the first elements of both lists and makes that the first element of the new list. It then pairs the second elements of both lists and makes that the second element, etc. What if the lists have unequal length? It simply stops when the shorter one stops. A reasonable definition for zip is: zip [] _ = [] zip _ [] = [] zip (x:xs) (y:ys) = (x,y) : zip xs ys The unzip function does the opposite. It takes a zipped list and returns the two "original" lists: Prelude> unzip [('f',1),('o',2),('o',3)] There are a whole slew of zip and unzip functions, named zip3, unzip3, zip4, unzip4 and so on; the ...3 functions use triples instead of pairs; the ...4 functions use 4-tuples, etc. Finally, the function take takes an integer ${\displaystyle n}$ and a list and returns the first ${\displaystyle n}$ elements off the list. Correspondingly, drop takes an integer ${\displaystyle n}$ and a list and returns the result of throwing away the first ${\displaystyle n}$ elements off the list. Neither of these functions produces an error; if ${\displaystyle n}$ is too large, they both will just return shorter lists. There is some syntactic sugar for dealing with lists whose elements are members of the Enum class (see the section on Instances), such as Int or Char. If we want to create a list of all the elements from ${\displaystyle 1}$ to ${\displaystyle 10}$ , we can simply write: Prelude> [1..10] We can also introduce an amount to step by: Prelude> [1,3..10] Prelude> [1,4..10] These expressions are short hand for enumFromTo and enumFromThenTo, respectively. Of course, you don't need to specify an upper bound. Try the following (but be ready to hit Control+C to stop the Prelude> [1..] Probably yours printed a few thousand more elements than this. As we said before, Haskell is lazy. That means that a list of all numbers from 1 on is perfectly well formed and that's exactly what this list is. Of course, if you attempt to print the list (which we're implicitly doing by typing it in the interpreter), it won't halt. But if we only evaluate an initial segment of this list, we're Prelude> take 3 [1..] Prelude> take 3 (drop 5 [1..]) This comes in useful if, say, we want to assign an ID to each element in a list. Without laziness we'd have to write something like this: assignID :: [a] -> [(a,Int)] assignID l = zip l [1..length l] Which means that the list will be traversed twice. However, because of laziness, we can simply write: assignID l = zip l [1..] And we'll get exactly what we want. We can see that this works: Prelude> assignID "hello" Finally, there is some useful syntactic sugar for map and filter, based on standard set-notation in mathematics. In math, we would write something like ${\displaystyle \{f(x)|x\in s\land p(x)\}}$ to mean the set of all values of ${\displaystyle f}$ when applied to elements of ${\displaystyle s}$ which satisfy ${\displaystyle p}$ . This is equivalent to the Haskell statement map f (filter p s). However, we can also use more math-like notation and write [f x | x <- s, p x]. While in math the ordering of the statements on the side after the pipe is free, it is not so in Haskell. We could not have put p x before x <- s otherwise the compiler wouldn't know yet what x was. We can use this to do simple string processing. Suppose we want to take a string, keep only the uppercase letters and convert those to lowercase. We could do this in either of the following two equivalent ways: Prelude> map toLower (filter isUpper "Hello World") Prelude> [toLower x | x <- "Hello World", isUpper x] These two are equivalent, and, depending on the exact functions you're using, one might be more readable than the other. There's more you can do here, though. Suppose you want to create a list of pairs, one for each point between (0,0) and (5,7) below the diagonal. Doing this manually with lists and maps would be cumbersome and possibly difficult to read. It couldn't be easier than with list Prelude> [(x,y) | x <- [1..5], y <- [x..7]] If you reverse the order of the x <- and y <- clauses, the order in which the space is traversed will be reversed (of course, in that case, y could no longer depend on x and you would need to make x depend on y but this is trivial). Lists are nice for many things. It is easy to add elements to the beginning of them and to manipulate them in various ways that change the length of the list. However, they are bad for random access, having average complexity ${\displaystyle {\mathcal {O}}(n)}$ to access an arbitrary element (if you don't know what ${\displaystyle {\mathcal {O}}(...)}$ means, you can either ignore it or take a quick detour and read the appendix chapter Complexity, a two-page introduction to complexity theory). So, if you're willing to give up fast insertion and deletion because you need random access, you should use arrays instead of lists. In order to use arrays you must import the Array module. There are a few methods for creating arrays, the array function, the listArray function, and the accumArray function. The array function takes a pair which is the bounds of the array, and an association list which specifies the initial values of the array. The listArray function takes bounds and then simply a list of values. Finally, the accumArray function takes an accumulation function, an initial value and an association list and accumulates pairs from the list into the array. Here are some examples of arrays being created: Prelude> :m Array Prelude Array> array (1,5) [(i,2*i) | i <- [1..5]] array (1,5) [(1,2),(2,4),(3,6),(4,8),(5,10)] Prelude Array> listArray (1,5) [3,7,5,1,10] array (1,5) [(1,3),(2,7),(3,5),(4,1),(5,10)] Prelude Array> accumArray (+) 2 (1,5) [(i,i) | i <- [1..5]] array (1,5) [(1,3),(2,4),(3,5),(4,6),(5,7)] When arrays are printed out (via the show function), they are printed with an association list. For instance, in the first example, the association list says that the value of the array at ${\ displaystyle 1}$ is ${\displaystyle 2}$ , the value of the array at ${\displaystyle 2}$ is ${\displaystyle 4}$ , and so on. You can extract an element of an array using the ! function, which takes an array and an index, as in: Prelude Array> (listArray (1,5) [3,7,5,1,10]) ! 3 Moreover, you can update elements in the array using the // function. This takes an array and an association list and updates the positions specified in the list: Prelude Array> (listArray (1,5) [3,7,5,1,10]) // [(2,99),(3,-99)] array (1,5) [(1,3),(2,99),(3,-99),(4,1),(5,10)] There are a few other functions which are of interest: │bounds │returns the bounds of an array │ │indices│returns a list of all indices of the array │ │elems │returns a list of all the values in the array in order │ │assocs │returns an association list for the array │ If we define arr to be listArray (1,5) [3,7,5,1,10], the result of these functions applied to arr are: Prelude Array> bounds arr Prelude Array> indices arr Prelude Array> elems arr Prelude Array> assocs arr Note that while arrays are ${\displaystyle {\mathcal {O}}(1)}$ access, they are not ${\displaystyle {\mathcal {O}}(1)}$ update. They are in fact ${\displaystyle {\mathcal {O}}(n)}$ update, since in order to maintain purity, the array must be copied in order to make an update. Thus, functional arrays are pretty much only useful when you're filling them up once and then only reading. If you need fast access and update, you should probably use FiniteMaps, which are discussed in the section on Finitemaps and have ${\displaystyle {\mathcal {O}}(\log n)}$ access and update. This section is work in progress, please help improving it! The Map datatype from the Data.Map module is a purely functional implementation of balanced trees. Maps can be compared to lists and arrays in terms of the time it takes to perform various operations on those datatypes of a fixed size, ${\displaystyle n}$ . A brief comparison is: │ │ List │ Array │ Map │ │insert│${\displaystyle {\mathcal {O}}(1)}$│${\displaystyle {\mathcal {O}}(n)}$│${\displaystyle {\mathcal {O}}(\log n)}$ │ │update│${\displaystyle {\mathcal {O}}(n)}$│${\displaystyle {\mathcal {O}}(n)}$│${\displaystyle {\mathcal {O}}(\log n)}$ │ │delete│${\displaystyle {\mathcal {O}}(n)}$│${\displaystyle {\mathcal {O}}(n)}$│${\displaystyle {\mathcal {O}}(\log n)}$ │ │find │${\displaystyle {\mathcal {O}}(n)}$│${\displaystyle {\mathcal {O}}(1)}$│${\displaystyle {\mathcal {O}}(\log n)}$ │ │map │${\displaystyle {\mathcal {O}}(n)}$│${\displaystyle {\mathcal {O}}(n)}$│${\displaystyle {\mathcal {O}}(n)}$ │ As we can see, lists provide fast insertion (but slow everything else), arrays provide fast lookup (but slow everything else) and maps provide moderately fast everything. The type of a map is for the form Map k a where k is the type of the keys and a is the type of the elements. That is, maps are lookup tables from type k to type a. The basic map functions are: empty :: Map k a insert :: k -> a -> Map k a -> Map k a delete :: k -> Map k a -> Map k a member :: k -> Map k a -> Bool lookup :: k -> Map k a -> a In all these cases, the type k must be an instance of Ord (and hence also an instance of Eq). There are also function fromList and toList to convert lists to and from maps. Try the following: Prelude> :m Data.Map Prelude Data.Map> let mymap = fromList [('a',5),('b',10),('c',1),('d',2)] Prelude Data.Map> let othermap = insert 'e' 6 mymap Prelude Data.Map> toList mymap Prelude Data.Map> toList othermap Prelude Data.Map> Data.Map.lookup 'e' othermap Prelude Data.Map> Data.Map.lookup 'e' mymap *** Exception: user error (Data.Map.lookup: Key not found) The Final Word on Lists You are likely tired of hearing about lists at this point, but they are so fundamental to Haskell (and really all of functional programming) that it would be terrible not to talk about them some It turns out that foldr is actually quite a powerful function: it can compute a primitive recursive function. A primitive recursive function is essentially one which can be calculated using only "for" loops, but not "while" loops. In fact, we can fairly easily define map in terms of foldr: map2 f = foldr (\a b -> f a : b) [] Here, b is the accumulator (i.e., the result list) and a is the element being currently considered. In fact, we can simplify this definition through a sequence of steps: foldr (\a b -> f a : b) [] ==> foldr (\a b -> (:) (f a) b) [] ==> foldr (\a -> (:) (f a)) [] ==> foldr (\a -> ((:) . f) a) [] ==> foldr ((:) . f) [] This is directly related to the fact that foldr (:) [] is the identity function on lists. This is because, as mentioned before, foldr f z can be thought of as replacing the [] in lists by z and the : by f. In this case, we're keeping both the same, so it is the identity function. In fact, you can convert any function of the following style into a foldr: myfunc [] = z myfunc (x:xs) = f x (myfunc xs) By writing the last line with f in infix form, this should be obvious: myfunc [] = z myfunc (x:xs) = x `f` (myfunc xs) Clearly, we are just replacing [] with z and : with f. Consider the filter function: filter p [] = [] filter p (x:xs) = if p x then x : filter p xs else filter p xs This function also follows the form above. Based on the first line, we can figure out that z is supposed to be [], just like in the map case. Now, suppose that we call the result of calling filter p xs simply b, then we can rewrite this as: filter p [] = [] filter p (x:xs) = if p x then x : b else b Given this, we can transform filter into a fold: filter p = foldr (\a b -> if p a then a:b else b) [] Let's consider a slightly more complicated function: ++. The definition for ++ is: (++) [] ys = ys (++) (x:xs) ys = x : (xs ++ ys) Now, the question is whether we can write this in fold notation. First, we can apply eta reduction to the first line to give: (++) [] = id Through a sequence of steps, we can also eta-reduce the second line: (++) (x:xs) ys = x : ((++) xs ys) ==> (++) (x:xs) ys = (x:) ((++) xs ys) ==> (++) (x:xs) ys = ((x:) . (++) xs) ys ==> (++) (x:xs) = (x:) . (++) xs Thus, we get that an eta-reduced definition of ++ is: (++) [] = id (++) (x:xs) = (x:) . (++) xs Now, we can try to put this into fold notation. First, we notice that the base case converts [] into id. Now, if we assume (++) xs is called b and x is called a, we can get the following definition in terms of foldr: (++) = foldr (\a b -> (a:) . b) id This actually makes sense intuitively. If we only think about applying ++ to one argument, we can think of it as a function which takes a list and creates a function which, when applied, will prepend this list to another list. In the lambda function, we assume we have a function b which will do this for the rest of the list and we need to create a function which will do this for b as well as the single element a. In order to do this, we first apply b and then further add a to the front. We can further reduce this expression to a point-free style through the following sequence: ==> (++) = foldr (\a b -> (a:) . b) id ==> (++) = foldr (\a b -> (.) (a:) b) id ==> (++) = foldr (\a -> (.) (a:)) id ==> (++) = foldr (\a -> (.) ((:) a)) id ==> (++) = foldr (\a -> ((.) . (:)) a) id ==> (++) = foldr ((.) . (:)) id This final version is point free, though not necessarily understandable. Presumbably the original version is clearer. As a final example, consider concat. We can write this as: concat [] = [] concat (x:xs) = x ++ concat xs It should be immediately clear that the z element for the fold is [] and that the recursive function is ++, yielding: concat = foldr (++) [] 1. The function and takes a list of booleans and returns True if and only if all of them are True. It also returns True on the empty list. Write this function in terms of foldr. 2. The function concatMap behaves such that concatMap f is the same as concat . map f. Write this function in terms of foldr.
{"url":"https://en.m.wikibooks.org/wiki/Haskell/YAHT/Language_advanced","timestamp":"2024-11-02T15:28:49Z","content_type":"text/html","content_length":"163863","record_id":"<urn:uuid:10d3ea71-7576-4004-9b3a-26737f4a21d9>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00303.warc.gz"}
wound surface area calculator Wound Surface Area Calculator Calculating the surface area of a wound is crucial in healthcare for determining treatment plans and assessing healing progress. Traditionally, this calculation involves complex formulas, but with the advent of digital tools, such as online calculators, the process has become more accessible and efficient. How to Use: To utilize the wound surface area calculator, simply input the required parameters into the designated fields, and with the click of a button, the calculator will provide you with the accurate surface area measurement of the wound. The most accurate formula for calculating wound surface area depends on the shape of the wound. For irregular shapes, the wound area can be estimated using the formula for the area of an ellipse: • A is the surface area of the wound. • a and 𝑏b are the lengths of the two principal axes of the ellipse. Example Solve: Suppose we have a wound with major axis 𝑎=5 cm and minor axis 𝑏=3 cm. Plugging these values into the formula, we get: Q: Can this calculator be used for wounds of any shape? A: Yes, the calculator employs a formula suitable for estimating the surface area of irregularly shaped wounds, such as ellipses. Q: How accurate are the results provided by this calculator? A: The calculator uses the precise mathematical formula for calculating wound surface area, ensuring accurate results. Q: Is this calculator suitable for professional medical use? A: While this calculator provides accurate estimations, it’s always recommended to consult healthcare professionals for comprehensive wound assessment. The wound surface area calculator offers a convenient and reliable solution for estimating the surface area of wounds, aiding healthcare professionals in treatment planning and monitoring healing
{"url":"https://calculatordoc.com/wound-surface-area-calculator/","timestamp":"2024-11-12T05:59:13Z","content_type":"text/html","content_length":"91486","record_id":"<urn:uuid:c36d06a7-92df-4b44-bfcd-aa0b45a29061>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00029.warc.gz"}
Normal Form Normal Forms From Scholarpedia James Murdock (2006), Scholarpedia, 1(10):1902. doi:10.4249/scholarpedia.1902 revision #91592 [link to/cite this article] (Redirected from Normal form A normal form of a mathematical object, broadly speaking, is a simplified form of the object obtained by applying a transformation (often a change of coordinates) that is considered to preserve the essential features of the object. For instance, a matrix can be brought into Jordan normal form by applying a similarity transformation. This article focuses on normal forms for autonomous systems of differential equations (vector fields or flows) near an equilibrium point. Similar ideas can be used for discrete-time dynamical systems (diffeomorphisms) near a fixed point, or for flows near a periodic orbit. Basic Definitions The starting point is a smooth system of differential equations with an equilibrium (rest point) at the origin, expanded as a power series \[ \dot x = Ax + a_1(x) + a_2(x) +\cdots, \] where \(x\in{\ mathbb R}^n\) or \({\mathbb C}^n\ ,\) \(A\) is an \(n\times n\) real or complex matrix, and \(a_j(x)\) is a homogeneous polynomial of degree \(j+1\) (for instance, \(a_1(x)\) is quadratic). The expansion is taken to some finite order \(k\) and truncated there, or else is taken to infinity but is treated formally (the convergence or divergence of the series is ignored). The purpose is to obtain an approximation to the (unknown) solution of the original system, that will be valid over an extended range in time. The linear term \(Ax\) is assumed to be already in the desired normal form, usually the Jordan or a real canonical form. A transformation to new variables \(y\) is applied, having the form \[ x=y+u_1(y)+u_2(y)+\cdots, \] where \(u_j\) is homogeneous of degree \(j+1\ . \) This results in a new system \[ \dot y = Ay + b_1(y) + b_2(y) +\cdots, \] having the same general form as the original system. The goal is to make a careful choice of the \(u_j\ ,\) so that the \ (b_j\) are "simpler" in some sense than the \(a_j\ .\) "Simpler" may mean only that some terms have been eliminated, but in the best cases one hopes to achieve a system that has additional symmetries that were not present in the original system. (If the normal form possesses a symmetry to all orders, then the original system had a hidden approximate symmetry with transcendentally small error.) Among many historical references in the development of normal form theory, two significant ones are Birkhoff (1996) and Bruno (1989). As the Birkhoff reference shows, the early stages of the theory were confined to Hamiltonian systems, and the normalizing transformations were canonical (now called symplectic). The Bruno reference treats in detail the convergence and divergence of normalizing An Example A basic example is the nonlinear oscillator with \(n=2\) and \[ A=\left[ \begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right]. \] In this case it is possible (no matter what the original \(a_j\) may be) to achieve \(b_j=0\) for \(j\) odd and to eliminate all but two coefficients from each \(b_j\) with \(j\) even. More precisely, writing \(r^2=y_1^2+y_2^2\ ,\) a normal form in this case is \[ \ dot y = Ay + \sum_{i=1}^{\infty} \alpha_ir^{2i}y + \beta_ir^{2i}Ay \ .\] In polar coordinates this becomes \[ \dot r = \alpha_1 r^3 + \alpha_2r^5+\cdots \] \[ \dot\theta = 1 + \beta_1r^2 + \beta_2r^ 4+\cdots \ .\] The first nonzero \(\alpha_i\) determines the stability of the origin, and the \(\beta_i\) control the dependence of frequency on amplitude. Also the normalized system has achieved symmetry (more technically, equivariance) under rotation about the origin. Although the classical (or level-one) approach to normal forms stops with the form obtained above for this example, it is important to note that neither the coefficients \(\alpha_i\) and \(\beta_i\) in the equation, nor the transformation terms \(u_j\) used to achieve the equation, are uniquely determined by the original \(a_j\ .\) In fact, by a more careful choice of the \(u_j\ ,\) it is possible to put the nonlinear oscillator into a hypernormal form (also called a unique, higher-level, or simplest normal form) in which all but finitely many of the coefficients \(\alpha_i\) and \( \beta_i\) are zero. Hypernormal forms are difficult to calculate, and from here on we speak only of classical normal Asymptotic Consequences of Normal Forms For some systems, the normal form (truncated at a given degree) is simple enough to become solvable. In this case it is of interest to ask whether this solution gives rise to a good approximation (an asymptotic approximation in some specific sense) to a solution of the original equation (say, with the same initial condition). The answer is "sometimes yes". ("Gives rise to" means that the solution of the truncated normal form usually must be fed back through the transformation to normal form.) Some popular books, such as Nayfeh (1993), present the subject entirely from this point of view, without proving any error estimates or noticing that there are cases in which asymptotic validity cannot hold. Several theorems and open questions in this regard are given in chapter 5 of Murdock (2003). The most basic theorem states that an asymptotic error estimate with respect to a small parameter holds if (a) the parameter is introduced correctly, (b) the matrix of the linear term is semisimple (see below) and has all its eigenvalues on the imaginary axis, and (c) the semisimple normal form style (see below) is used. Although the asymptotic use of normal forms is important when it is true, and has many practical applications, the primary importance of normal forms is as a preparatory step towards the study of qualitative dynamics, unfoldings, and bifurcations. Geometrical Consequences of the Normal Form It has already been pointed out that a normal form can decide stability questions and establish hidden symmetries. Computing the normal form up to degree \(k\) also automatically computes (to degree \(k\)) the stable, unstable, and center manifolds, the center manifold reduction, and the fibration of the center-stable and center-unstable manifolds over the center manifold. The common practice of computing the center manifold reduction first, and then computing the normal form only for this reduced system, seems to save work but loses many of these results. See chapter 5 of Murdock (2003). On occasion, the truncation of a normal form produces a simple system that is topologically equivalent to the original system in a neighborhood of the equilibrium, called topological normal form. For instance, in the example above, truncating after the first nonvanishing \(\alpha_i\) will accomplish this, but if all \(\alpha_i\) are zero, the topological behavior is probably determined by a transcendentally small effect that is not captured by the normal form. Normal forms are important for determining bifurcations of a system, but this requires the inclusion of unfolding parameters. The Homological Equation and Normal Form Styles In the general case, we define the Lie derivative operator \(L_A\) associated with the matrix \(A\) by \((L_A v)(x)=v'(x)Ax-Av(x)\ ,\) where \(v\) is a vector field and \(v'\) is its matrix of partial derivatives. Then \(L_A\) maps the vector space \(\mathcal{V}_j\) of homogeneous vector fields of degree \(j+1\) into itself. The relation between the \(a_j\ ,\) \(b_j\ ,\) and \(u_j\) is determined recursively by the homological equations \[ L_A u_j = K_j - b_j \ ,\] where \(K_1=a_1\) and \(K_j\) equals \(a_j\) plus a correction term computed from \(a_1,\dots,a_{j-1}\) and \(u_1,\ dots,u_{j-1}\ .\) Let \(\mathcal{N}_j\) be any choice of a complementary subspace to the image of \(L_A\) in \(\mathcal{V}_j\ ;\) then it is possible to choose the \(u_j\) so that each \(b_j\in \ mathcal{N}_j\ .\) (Take \(b_j=P_j K_j\ ,\) where \(P_j:\mathcal{V}_j\rightarrow\mathcal{N}_j\) is the projection map, and note that the homological equation can be solved, nonuniquely, for \(u_j\ . \)) The choice of \(\mathcal{N}_j\) is called a normal form style, and represents the preference of the user as to what is considered "simple". The purpose of this procedure is to ensure that the higher-order correction terms, \(u_j\ ,\) are bounded, so that the approximation to the solution, \(x(t)\ ,\) is valid over an extended range in time. The Semisimple Case; Resonant Monomials The theory breaks into two cases according to whether \(A\) is semisimple (diagonalizable) or not. The semisimple case, illustrated by the nonlinear oscillator above, is the easiest, and there is only one useful style (in which \(\mathcal{N}_j\) is the kernel of \(L_A\)), ultimately due to Poincaré. It is easy to describe the semisimple normal form if \(A\) is diagonal with diagonal entries \ (\lambda_1,\dots,\lambda_n\) (which usually requires introducing complex variables with reality conditions): The \(r\)th equation (for \(\dot y_r\)) of the normalized system will contain only monomials \(y_1^{m_1}\cdots y_n^{m_n}\) satisfying \[ m_1\lambda_1+\cdots+m_n\lambda_n-\lambda_r=0 \ .\] Such monomials are called resonant because for pure imaginary eigenvalues, this equation becomes a resonance among frequencies in the usual sense. An elementary treatment of normal forms in the semisimple case only is by Kahn and Zarmi (1998). The Nonsemisimple Case In the nonsemisimple case there are two important styles, the inner product normal form, originally due to Belitskii but popularized by Elphick et al. (1987), and the sl(2) normal form due to Cushman and Sanders. In the inner product style, \(\mathcal{N}_j\) is the kernel of \(L_{A^*}\ ,\) \(A^*\) being the adjoint or conjugate transpose of \(A\ .\) In the sl(2) style, \(\mathcal{N}_j\) is the kernel of an operator defined from \(A\) using the theory of the Lie algebra sl(2). The inner product style is more popular at this time, but the sl(2) style has a much richer mathematical structure with deep connections to sl(2) representation theory and to the classical invariant theory of Cayley, Sylvester and others. Because of this the sl(2) style has computational algorithms that are not available for the inner product style. There is also a simplified normal form style that is derived from the inner product style by changing the projection. A modern introduction to normal form theory, containing all the styles mentioned here with references and historical remarks, may be found in the monograph by Murdock (2003). Some more recent developments are contained in the last few chapters of Sanders, Verhulst, and Murdock (2007). • Poincaré, H., New Methods of Celestial Mechanics (Am. Inst. of Physics, 1993). • Birkhoff, G.D., Dynamical Systems (Am. Math. Society, Providence, 1996). • Arnold, V.I., Geometrical Methods in the Theory of Ordinary Differential Equations (Springer-Verlag, New York, 1988). • Bruno, A.D., Local Methods in Nonlinear Differential equations (Springer-Verlag, Berlin, 1989). • Elphick C., Tirapegui E., Brachet M.E., Coullet P., and Iooss G. A simple global characterization for normal forms of singular vector fields. Physica D, 29:95-127(1987). • Nayfeh, A.H., Method of Normal Forms. (Wiley, New York, 1993). • Kahn P.B. and Zarmi Y., Nonlinear Dynamics: Exploration through Normal Forms. (Wiley, New York, 1998). • Murdock J. Normal Forms and Unfoldings for Local Dynamical Systems. (Springer, New York, 2003). • Jan Sanders, Ferdinand Verhulst, and James Murdock, Averaging Methods in Nonlinear Dynamical Systems, Springer, New York, 2007, xxiii+431. Internal references External Links See Also Bifurcations, Dynamical Systems, Equilibria, Jordan Normal Form, Lie Algebra, Ordinary Differential Equations, Unfoldings
{"url":"http://var.scholarpedia.org/article/Normal_form","timestamp":"2024-11-11T20:43:15Z","content_type":"text/html","content_length":"45921","record_id":"<urn:uuid:900f597f-4749-46c1-843c-cdd7b74e8348>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00799.warc.gz"}
SYSTEMS VERIFICATION - 2020/1 Module Overview The course is an introduction in formal methods for system specification and verification. It will focus on logic-based formalisms and techniques, and specifically on model checking. The main logics taught will be temporal logics, which are mainstream in verification, especially analysis of hardware systems. Other logics and verification techniques (such as theorem proving) will be included to a smaller extended. Model checkers will be used in the labs, on different system-verification problems. Elements of building model checking tools will be presented and explored. Some elements of advanced verification techniques (e.g., abstraction) will be mentioned. Module provider Computer Science Module Leader BOUREANU Ioana (Computer Sci) Number of Credits: 15 ECTS Credits: 7.5 Framework: FHEQ Level 6 Module cap (Maximum number of students): N/A Overall student workload Independent Learning Hours: 110 Module Availability Semester 2 Prerequisites / Co-requisites Basic discrete maths (sets, functions, etc.) -- COM1026 Basic propositional calculus -- COM1026 Basic first-order logic -- COM1026 Basic C/C++ programming -- COM2040 Module content The content (topics of lectures, tutorials and labs) follow: 1. Introduction + Introduction to System Verification 2. Basic Modal Concepts Tutorial -- Modal specifications and satisfaction 3. The logics LTL and CTL Tutorial -- LTL and CTL 4. The logics LTL vs CTL cont'd (expressivity) + The logic of CTL* Tutorial 3 -- LTL and CTL (cont’d) 5. Advanced Temporal Specifications Examples Lab -- on the introduction to a model checker 6. System-Modelling Examples 7. Explicit Model Checking Tutorial -- Explicit model checking 8. Binary Decision Diagrams (BDDs) Tutorial -- Binary Decision Diagrams Lab -- on using/manipulating BDDs 9. Symbolic Model Checking Lab 3 -- on further, more advanced usage of a model checker 10. Non-classical Logics - Part I 11. Non-classical Logics - Part II 12. Revision week Assessment pattern Assessment type Unit of assessment Weighting Coursework Coursework 1 15 Coursework Coursework 2 15 Examination Final exam 70 Alternative Assessment Assessment Strategy The assessment strategy is designed to provide students with the opportunity to demonstrate that they have achieved the module learning outcomes. Thus, the summative assessment for this module consists of: · First individual coursework on temporal logic and explicit model checking. This addresses LO1 and LO2. · Second individual coursework on applied non-classical logics, symbolic model checking of temporal logics and/or other verification techniques for applied logics (e.g., program verification with Hoare logic) . This addresses LO1 and LO2. · A 2h unseen examination on the whole course content. This addresses LO1, LO2, LO3, LO4. The individual pieces of coursework will be due around week 5 and 10 respectively. There will be five labs and the best four submissions of the total five will be considered towards the lab component of the mark. The exam will take place at the end of the semester during the exam period. Formative assessment and feedback 1. PollEverywhere or other interactive polling methods will be used in the lectures with each lecture consisting of a number of slides explaining the theory followed by a number of slides gauging the students’ understanding. The answers are discussed when necessary, e.g., if a high proportion (more than 25%) of the students got the answer wrong. 2. Individual formative feedback will also be given during the lab sessions, and as part of the summative assessment. Assessment & Assessment Strategy: Executive Summary Coursework 1 (15%) -- take-home work, testing learning outcomes 1 and 2 Coursework 2 (15%) -- take-home work, testing learning outcomes 1 and 2 Final exam (70%) -- unseen written piece of examination in the exam period, testing learning outcomes 1, 2, 3 and 4 Module aims • Introducing formal methods for system specification and verification • Focus on logic-based techniques for system verification, particularly model checking • Give a flavour of advanced model checking techniques and of other verification methods, such as theorem proving Learning outcomes Attributes Developed 001 Understand the use of temporal logic and, to some extend, other logics as formal specification languages KCT 002 Understand and use verification algorithms such as the ones based on SAT (satisfiability of logic formulae) and/or using ordered-binary-decision diagrams KCPT 003 Learn how to use of a model checker and, potentially, a theorem prover to verify systems against formal specifications KCP 004 Appreciate the limitations of current techniques and develop a basic understanding of research directions in this space KC C - Cognitive/analytical K - Subject knowledge T - Transferable skills P - Professional/Practical skills Methods of Teaching / Learning The learning and teaching strategy is designed to develop a critical understanding of the foundations of systems verification, facilitating self-directed further studying in this field. The skills learned in this module will be transferable to other verification techniques, such as program analysis. Also developing critical thinking is at the core of this module. The learning and teaching methods include: • Twenty-four hours of lectures with class discussion • Ten hours of tutorials • Six hours of lab classes • Use of an online forum for facilitated discussion Indicated Lecture Hours (which may also include seminars, tutorials, workshops and other contact time) are approximate and may include in-class tests where one or more of these are an assessment on the module. In-class tests are scheduled/organised separately to taught content and will be published on to student personal timetables, where they apply to taken modules, as soon as they are finalised by central administration. This will usually be after the initial publication of the teaching timetable for the relevant semester. Other information 1. Michael Huth and Mark Ryan. 2004. Logic in Computer Science: Modelling and Reasoning about Systems. Cambridge University Press, New York, NY, USA. 2. Tobias Nipkow and Gerwin Klein. 2014. Concrete Semantics: With Isabelle/HOL. Springer Publishing Company. 3. Ronald Fagin, Joseph Y. Halpern, Yoram Moses, and Moshe Y. Vardi. 2003. Reasoning about Knowledge. MIT Press, Cambridge, MA, USA. Programmes this module appears in Programme Semester Classification Qualifying conditions Computer Science BSc (Hons) 2 Optional A weighted aggregate mark of 40% is required to pass the module Computing and Information Technology BSc (Hons) 2 Optional A weighted aggregate mark of 40% is required to pass the module Please note that the information detailed within this record is accurate at the time of publishing and may be subject to change. This record contains information for the most up to date version of the programme / module for the 2020/1 academic year.
{"url":"https://catalogue.surrey.ac.uk/2020-1/module/COM3028","timestamp":"2024-11-03T09:38:25Z","content_type":"text/html","content_length":"20671","record_id":"<urn:uuid:081daa50-989d-47f7-a2c1-b3ed16ebc49e>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00576.warc.gz"}
AP Stats Welcome to A.P. Statistics. In order to guarantee that we have sufficient time to review for the A.P. Exam, you are expected to review the statistics you learned in middle school. The topics: mean, median, mode, stem and leaf plots, dot plots, histogram and whisker box plots are covered in Chapter 1 of The Practice of Statistics. You can Google the topics to review them this summer. You are also expected to read, take notes and complete the assigned problems. You will have plenty of time to do the few problems I assigned from the textbook once you pick up your schedule and textbook before school starts. The key is to do the reading this summer! Read the Intro to Stats and Section 1-1 ( Distribution, Center, Spread, Shape, Outliers, Bar and Pie Charts, Dot Plots, Stem & Leaf Plots ).Take notes pp. xii-16 from The Practice of Statistics. Do exercises (1.1,1.3) this stands for chapter 1-problem 1 You are also expected to type a one-page summary (500 word minimum) of the summer reading book, Outliers, by Malcolm Gladwell. In the summary, include at least one paragraph on the vignette that you liked the most in addition to the general overview of the book. This summary is due Tuesday, September 7 which is the Tuesday after Labor Day. When you get a chance, check out the postings on my webpage at westernhigh.org. I will have several hotlinks for A.P. Statistics including the Student Information Sheet, AP Course Description, and the A.P. Statistics Chapter 1 Syllabus. On the first day of class, we will discuss in more detail what will be posted on this page Currently my webpage is undergoing some construction but what was posted last year should still visible. Again, I want to welcome you to A. P. Statistics, a math course that has applications in many disciplines and to the real world. Welcome to A.P. Statistics. In order to guarantee that we have sufficient time to review for the A.P. Exam, you are expected to review the statistics you learned in middle school. The topics: mean, median, mode, stem and leaf plots, dot plots, histogram and whisker box plots are covered in Chapter 1 of The Practice of Statistics. You can Google the topics to review them this summer. You are also expected to read, take notes and complete the assigned problems. You will have plenty of time to do the few problems I assigned from the textbook once you pick up your schedule and textbook before school starts. The key is to do the reading this summer! Read the Intro to Stats and Section 1-1 ( Distribution, Center, Spread, Shape, Outliers, Bar and Pie Charts, Dot Plots, Stem & Leaf Plots ).Take notes pp. xii-16 from The Practice of Statistics. Do exercises (1.1,1.3) this stands for chapter 1-problem 1 You are also expected to type a one-page summary (500 word minimum) of the summer reading book, Outliers, by Malcolm Gladwell. In the summary, include at least one paragraph on the vignette that you liked the most in addition to the general overview of the book. This summary is due Tuesday, September 7 which is the Tuesday after Labor Day. When you get a chance, check out the postings on my webpage at westernhigh.org. I will have several hotlinks for A.P. Statistics including the Student Information Sheet, AP Course Description, and the A.P. Statistics Chapter 1 Syllabus. On the first day of class, we will discuss in more detail what will be posted on this page. Currently my webpage is undergoing some construction but what was posted last year should still visible. Again, I want to welcome you to A. P. Statistics, a math course that has applications in many disciplines and to the real world.
{"url":"http://westernhigh.org/student-news/ap-stats","timestamp":"2024-11-13T18:02:16Z","content_type":"application/xhtml+xml","content_length":"19176","record_id":"<urn:uuid:e66e8ccd-86a3-406d-9d74-c261d2eed881>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00113.warc.gz"}
LEFT in Excel (Formula, Examples) | How to Use LEFT Function? What is the LEFT Formula in Excel? The LEFT Formula in Excel allows you to extract a substring from a string starting from the leftmost portion of it (that means from the start). It is an inbuilt Excel function specifically defined for string manipulations. Let’s suppose you have typed the word “Rose Watts” in a cell, and you are interested in extracting just the name “Rose” in another cell. With the LEFT formula, you can achieve this task easily by using the formula “=LEFT(A2, 4)“. In this formula, “A2” signifies the cell containing “Rose Watts” while “4” specifies that you want the first four characters. Table of Contents Now, let’s delve into a more detailed understanding of the LEFT function, starting with its syntax and arguments: =LEFT(text, num_chars) The LEFT formula’s syntax comprises two primary arguments: • text: This required argument represents the source cell or range from which you want to extract characters. • num_chars: This argument specifies the number of characters you want to extract from the beginning of the text string. If omitted, the formula extracts only one character. How to Use LEFT Formula in Excel? We have two methods to use the LEFT formula in Excel. A) Direct Cell Formula Approach 1. Start typing =LEFT( in the cell where you want to display the result 2. Now add the arguments, data and close the bracket like this: =LEFT(A1,2) 3. Press Enter, and the result will display in the same cell. B) Using Excel Ribbon • Select the cell where the result will appear (e.g., B2) • Go to the Formulas tab • Find Text in the categories • Click the LEFT function • A dialog box will open. Enter the cell (e.g., A2) in Text and the number of characters you want to extract (e.g.,4) in Num_chars • Click OK to insert the LEFT function • You can see the result in the bottom left of the dialog box. Example #1: Extracting a Single Character Let’s explore how to utilize the LEFT formula in Excel to extract the initial letter “D” of the word “Disneyland” present in Cell A1. 1. Select the cell where you want to display the result (e.g., B2). 2. Enter the formula =LEFT(A1) in Cell B2. 3. Click Enter. We get the result as D. Note: As previously mentioned, in case we provide no value for the “num_chars” parameter, the formula will extract just a single character. Example #2: Extracting More than One Character Suppose you have data in cell A1 containing the text “My name is Elsa”, and you need to extract the first 7 characters from this text. Let’s see how to do it using the LEFT formula in Excel: 1. Select the destination cell, i.e., B1 2. Enter the formula =LEFT(A1,7) 3. Press Enter. Note: The space between “My” and “name” is also considered a character. We get the result as “My name.” Example #3: Extracting Numbers Let us see how we can use the LEFt formula to extract the leftmost numbers from a cell. 1. Select Cell B1 2. Enter =LEFT(A1,2) as the formula 3. Press Enter to display the result. Example #4: Extracting Values for a Range of Cells Suppose you have a range of cells from A1 to A4, and you want to extract the first 3 characters from each cell. Let’s see how to do that using the LEFT function in Excel. 1. Select all the result cells where you want the extracted data, such as B1 to B4 2. With these cells selected, enter the formula with a range like this: =LEFT(A1:A4,3) 3. Click Ctrl+Shift+Enter for the result. Example #5: Using Cell Reference for Num_chars Argument Let’s say you have a cell that contains the number of characters you want to extract. Let us see how to use a cell reference in the Num_chars argument. Suppose Cell B1 has the value 4. Now, you can use the cell B1 as a cell reference in the LEFT formula to extract the first 4 characters from cell A1. You can use the below formula: Can You Use the Left Formula With Dates? It is not possible to use the LEFT formula with dates because dates have a specific format. Excel stores date values as numbers at the backend. Thus, using the LEFT formula on a date in Excel will give you a series of numbers representing the date rather than a part of the date itself. Therefore, when you use LEFT function on a date: 1. You might lose important date details like the day or year. 2. Different date formats can give you different results. 3. It can lead to errors if dates aren’t in the expected format. Solution: Change the format of the cell To work with dates correctly, you can either use functions designed for dates, or you can convert the format of the cell from Date to Text. As a result, you will be able to extract the date from the cell. Left Formula Errors Using the LEFT formula with certain types of characters (non-printable, non-text, etc.) or making any other mistakes can lead to errors. Here are a few errors that can occur while using the LEFT 1. LEFT Function does Not Print Non-Text Data As the LEFT function usually works with text data, it might not work as expected if you use it on dates, or other non-text data (images, audio, binaries, currency, etc). Thus, ensure that the data you are working with is in text or general. In the image below, when trying to extract the first three characters (“$20”), the function doesn’t account for the dollar symbol. Instead, it shows the following three characters as “200.” 2. #VALUE Error Due to Negative Character Count If the num_chars argument is not a positive number, i.e., if it’s zero or negative, the LEFT function will return the #VALUE! Error. 3. Extra Spaces or Non-Printable Characters can affect the result If there are extra spaces in the text, it can affect the results of the LEFT function. For instance, in the below image, the text in cell A1 has a single space in the beginning of the sentence. So, when we try to extract 5 characters, we get the result as “ Lead” rather than “Leadi”. Advanced Applications of LEFT Formula in Excel Apart from using the LEFT function on its own, we can combine the LEFT function with other functions for more advanced applications. 1. LEFT with LEN Function Let’s say you have the text string in cell A1 and you want to remove the last 3 characters, we can use the combination of LEFT and LEN functions: =LEFT(A1, LEN(A1) – 3) 2. LEFT with FIND Function Let’s say, you want to extract text from cell A1 up to “New”. You can use the below formula: =LEFT(A1, FIND(“New”, A1) – 1) This formula finds the position of the “New” using FIND, then extracts the desired text using LEFT. 💡Note: The FIND function is case-sensitive when searching for text within a cell. Thus, make sure to write the word in the proper case. For example, if you put “New” in lowercase, then it will show a #VALUE! Error. 3. LEFT with SEARCH Function The LEFT with SEARCH formula works the same way as LEFT with FIND. The only difference is SEARCH is not case-sensitive. For instance, when you use SEARCH, it will find the desired text regardless of whether it’s written as “new,” “NEW,” or “New,” focusing only on the term “New.” 4. LEFT with VALUE Function Let us take an example of a cell that has alphanumeric values. Suppose we want to display the first three characters in a number format. If we simply use the LEFT formula, we will get the result in text format not number format. Thus, we can use the LEFT function with the VALUE function and convert the characters into numerical format. 💡Note: In the above image, the content in cell A1 is in text format while the numbers in cell B1 are in number format. Frequently Asked Questions (FAQs) Q1. What are the other softwares in which we can use the LEFT function? Answer: Apart from Excel, you can use the LEFT function in various software and programming languages, including: 1. Tableau 2. Power BI 3. DAX (Data Analysis Expressions) 4. SQL (Structured Query Language) 5. JavaScript These tools and languages provide the LEFT function to manipulate and extract characters from text strings similarly across different platforms. Q2. Does the LEFT function work similarly to Excel in Google Sheets? Answer: Yes, the LEFT function works similarly in Google Sheets as it does in Excel. It extracts characters from the start of a text string, and its usage is consistent between the two programs. Q3. Can you use the LEFT function with Excel VBA? Yes, we can use the LEFT formula with VBA, for which you can use the Microsoft Visual Basic Editor. You can either give sheet reference for the source cell or enter the word directly, as shown below, Recommended Articles This article is a guide to the LEFT formula in Excel. Here we discuss how to use the LEFT function in Excel using several examples. We have also provided a downloadable Excel template. You can also go through our other suggested articles,
{"url":"https://www.educba.com/left-formula-in-excel/","timestamp":"2024-11-13T11:08:46Z","content_type":"text/html","content_length":"364789","record_id":"<urn:uuid:2a301fb7-7b9b-4143-9c7c-a75c7c743669>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00180.warc.gz"}
The A<sub>α</sub> spectral moments of digraphs with a given dichromatic number The A[α]-matrix of a digraph G is defined as A[α](G)=αD^+(G)+(1−α)A(G), where α∈[0,1), D^+(G) is the diagonal outdegree matrix and A(G) is the adjacency matrix. The k-th A[α] spectral moment of G is defined as ∑[i=1] ^nλ[αi] ^k, where λ[αi] are the eigenvalues of the A[α]-matrix of G, and k is a nonnegative integer. In this paper, we obtain the digraphs which attain the minimal and maximal second A[α] spectral moment (also known as the A[α] energy) within classes of digraphs with a given dichromatic number. We also determine sharp bounds for the third A[α] spectral moment within the special subclass which we define as join digraphs. These results are related to earlier results about the second and third Laplacian spectral moments of digraphs. • UT-Hybrid-D • Dichromatic • Laplacian spectral moment • A spectral moment Dive into the research topics of 'The A[α] spectral moments of digraphs with a given dichromatic number'. Together they form a unique fingerprint.
{"url":"https://research.utwente.nl/en/publications/the-asub%CE%B1sub-spectral-moments-of-digraphs-with-a-given-dichromati","timestamp":"2024-11-03T15:54:37Z","content_type":"text/html","content_length":"54883","record_id":"<urn:uuid:95f7b02c-89b6-42d5-9aea-d9d2d94574d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00836.warc.gz"}
Topics: Energy Conditions General Idea > s.a. quantum field theory effects; singularities; energy-momentum tensor. * Idea: We try to define in some way the notion of positivity of the local energy density, even without having a definition of energy density. * Remark: If we try to think of the Einstein equation as giving T[ab] once we specify some g[ab], the problem is that the solution will in general not satisfy the energy conditions; There are indications that the energy conditions should be thought of as conditions on the geometry, rather than the matter. * Negative energy densities: They are predicted for quantum fields in black-hole radiation; In addition, all of the local conditions below have been experimentally tested in the lab, and shown not to hold for the Casimir effect (s.a. refs on Lorentzian wormholes); It is not clear whether the averaged WEC holds in those cases, but it seems that it could be violated as well. @ General references: Visser & Barceló gq/00-conf; Carter gq/02-conf [and vacuum stability]; Barceló & Visser IJMPD(02)gq-GRF; Curiel a1405 [primer, conceptual]; Wall PRL(17)-a1701 [novel method for deriving energy conditions, classical and quantum]; Martín-Moruno & Visser a1702-ch [and semiclassical]; Kontou & Sanders a2003-CQG [rev]. @ Extensions: Martín-Moruno & Visser PRD(13)-a1305, JHEP(13)-a1306 [flux energy conditions and other semiclassical replacements]; Martín-Moruno & Visser a1510-MG14 [non-linear energy conditions and their quantum extensions]; Maeda & Martínez a1810 [in arbitrary dimensions]. @ In modified gravity: Santos et al PRD(07)-a0708 [f(R) theories]; Capozziello et al PLB(14)-a1312; Capozziello et al PRD(15)-a1407 [in extended theories]; Zubair & Waheed ASS(15)-a1502 [f(T) gravity]; Parikh & van der Schaar PRD(15)-a1406 [derivation of null energy condition from worldsheet string theory]; Bamba et al GRG(17) [f(G) gravity]; Capozziello et a PLB(18)-a1803 [f(R) cosmology]; Ilyas IJGMP(19)-a1907 [in non-local gravity]. @ Operationally, with detectors: Helfer gq/96, CQG(98)gq/97. @ At bounces: Tippett & Lake gq/04; Giovannini PRD(17)-a1708 [averaged energy conditions]. @ Other cosmology: Gong & Wang PLB(07)-a0705 [acceleration]; Lima et al proc(10)-a0812. @ Worldline quantum inequalities: Fewster CQG(00)gq/99, PRD(04)gq, & Verch CMP(02)mp/01 [Dirac fields in curved spacetime]. > Related topics: see causality violations; tests of general relativity with light; types of higher-order gravity theories. > Online resources: see Wikipedia page. Null Energy Condition $ Def: A stress-energy tensor \(T_{ab}\) satisfies the null energy condition if \[ T_{ab}\,l^al^b \ge 0\;,\quad{\rm for\ any\ null\ vector}\ l^a\;. \] @ Proofs: Parikh & Svesko PRD(17)-a1511 [from the second law of thermodynamics]; Parikh IJMPD(15)-a1512 [from string theory and thermodynamics]; Koeller & Leichenauer PRD(16)-a1512 [holographic]. @ Quantum null energy condition: Bousso et al PRD(16)-a1509; Fu et al CQG(17)-a1706 [in curved space]; Balakrishnan et al a1706 [general proof]; Fu & Marolf PRL(18)-a1711; Malik & Lopez-Mobilia a1910 [free fermionic field theories]. Weak Energy Condition > s.a. cosmological expansion [constraint on history]. * Idea: Energy density and pressure satisfy ρ + p ≥ 0. $ Def: A stress-energy tensor T[ab] satisfies the weak energy condition if T[ab] t^ a t^ b ≥ 0 , for any causal vector t^ a. @ References: Roman PRD(86) [in quantum field theory]; Bellucci & Faraoni NPB(02)ht/01 [non-minimal scalar field, and definition of T[ab]]. Averaged Null / Weak Energy Condition > s.a. anomalies. $ Def: A stress-energy tensor T[ab] satisfies the averaged energy condition if ∫[γ] T[ab] l^ a l^ b dλ ≥ 0 , for any inextendible null geodesic γ with tangent vector l^ a. @ General references: in Visser PRD(90); Yurtsever CQG(90); Fewster & Osterbrink PRD(06)gq [non-minimally coupled scalar]; Kontou PhD-a1507 [and quantum inequalities]. @ In quantum field theory: Yurtsever PRD(95)gq/94, PRD(95)gq; Verch JMP(00)mp/99 [2D]; Fewster & Roman PRD(03)gq/02; Fewster et al PRD(07)gq/06 [spacetimes with boundaries]; Kelly & Wall PRD(14)- a1408 [holographic proof]; Hartman et al JHEP(17)-a1610 [from microcausality]. @ Variations: Hayward PRD(95)gq/94, CQG(94)gq [quasilocal]; Graham & Olum PRD(05)ht, Graham JPA(06)in [in Casimir effect situations]; Graham & Olum PRD(07)-a0705 [achronal averaged null energy condition]; Urban & Olum PRD(10)-a1002 [and violations]. Dominant Energy Condition * Idea: Energy density and pressure satisfy ρ ≥ 0 and |p| ≤ ρ. $ Def: A stress-energy tensor T[ab] satisfies the dominant energy condition if T[ab] t^ a t'^b ≥ 0 , for any two future directed causal vectors t^ a, t'^a. * Relationships: This condition implies the WEC, and is stronger that the positivity of the local energy seen by any observer; It is equivalent to requiring that the local four-momentum T[ab] t^ a seen by any observer be a future-directed timelike or null vector (the speed of energy flow does not exceed the speed of light). Strong Energy Condition $ Def: A stress-energy tensor T[ab] is said to satisfy the strong energy condition if (T:= T^ a[a]) T[ab] t^ a t^ b ≥ − \(1\over2\)T, for any unit timelike vector t^ a. * Relationships: The strong energy condition does not imply the WEC, unless in the definition of the latter we replace "... any timelike vector t" by "... any null vector t", but the former does appear to be a stronger physical requirement. * Applications: Observations suggest that it was violated sometime between galaxy formation and the present. @ References: Zaslavskii PLB(10)-a1004 [and regular spherical black holes]. Violations > s.a. cosmic strings; QED; quantum field theory effects [negative en density]; quantum field theory effects in curved spacetime. * Of nec: The nec can be violated in a consistent way in models with unconventional kinetic terms, such as Galileon theories and their generalizations. * In quantum field theory in curved spacetime: One issue is that the gravitational field will produce vacuum polarization, and the corresponding stress-energy tensor may not satisfy the energy @ In cosmology: Borde & Vilenkin PRD(97) [inflation]; Visser PRD(97)gq; Visser & Barceló gq/00-conf [implications]; Aref'eva & Volovich TMP(08)ht/06 [consistency of models]; Santos et al PRD(07)ap/07 , PRD(07)-a0706 [and conditions on expansion], Lima et al PLB(08)-a0808 [and acceleration, supernova data]; Cattoën & Visser CQG(08) [parameters]; Jamil et al GRG(12)-a1211 [FLRW models in generalized teleparallel gravities]. @ Nec violation and instabilities: Buniy & Hsu PLB(06)ht/05; Dubovsky et al JHEP(06)ht/05; Creminelli et al JHEP(06)ht [violation without instabilities and cosmology]; Buniy et al PRD(06)ht; Rubakov PRD(13)-a1305, PU(14)-a1401 [and universe creation in the laboratory]; Elder et al PRD(14) [solution evolving between satisfying and violating the null energy condition]; Krommydas a1806-MS [and @ Averaged null energy condition: Urban & Olum PRD(10)-a0910 [violation in conformally flat spacetime]; Kontou & Olum PRD(15)-a1507 [proof]. @ In quantum field theory in curved spacetime: Visser PRD(96)gq [Hartle-Hawking vacuum], PRD(96)gq [Boulware vacuum], PRD(96)gq [1+1 Schwarzschild], PRD(97)gq [Unruh vacuum]; Xiong & Zhu IJMPA(07)gq/ 06 [strong energy condition in lqg]. @ In semiclassical general relativity: Flanagan & Wald PRD(96) [back-reaction and ANEC]; Visser gq/97-MG8. @ Wormholes: Barceló & Visser CQG(00)gq, NPB(00)ht [brane world]; Kar et al Pra(04)gq [quantification]; Roman gq/04-MGX. @ And second law: Ford & Roman PRD(01)gq/00; Davies & Ottewill PRD(02)gq. main page – abbreviations – journals – comments – other sites – acknowledgements send feedback and suggestions to bombelli at olemiss.edu – modified 13 aug 2020
{"url":"https://www.phy.olemiss.edu/~luca/Topics/e/energy_cond.html","timestamp":"2024-11-02T15:19:49Z","content_type":"text/html","content_length":"25102","record_id":"<urn:uuid:ed4a1e4f-df50-45ca-b6a9-9b20b9080b73>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00352.warc.gz"}
a decimal 1/241 or 0.004 can be represented in multiple ways (even as a percentage). The key is knowing when we should use each representation and how to easily transition between a fraction, decimal, or percentage. Decimals and Fractions represent parts of numbers, giving us the ability to represent smaller numbers than the whole. But in some cases, fractions make more sense, i.e., cooking or baking and in other situations decimals make more sense as in leaving a tip or purchasing an item on sale. After deciding on which representation is best, let's dive into how we can convert fractions to 1/241 is 1 divided by 241 The first step of teaching our students how to convert to and from decimals and fractions is understanding what the fraction is telling is. 1 is being divided into 241. Think of this as our directions and now we just need to be able to assemble the project! Fractions have two parts: Numerators on the top and Denominators on the bottom with a division symbol between or 1 divided by 241. To solve the equation, we must divide the numerator (1) by the denominator (241). Here's 1/241 as our equation: Numerator: 1 • Numerators are the portion of total parts, showed at the top of the fraction. With a value of 1, you will have less complexity to the equation; however, it may not make converting any easier. The bad news is that it's an odd number which makes it harder to covert in your head. Values like 1 doesn't make it easier because they're small. Now let's explore X, the denominator. Denominator: 241 • Denominators represent the total parts, located at the bottom of the fraction. 241 is one of the largest two-digit numbers to deal with. But the bad news is that odd numbers are tougher to simplify. Unfortunately and odd denominator is difficult to simplify unless it's divisible by 3, 5 or 7. Overall, two-digit denominators are no problem with long division. Now it's time to learn how to convert 1/241 to a decimal. How to convert 1/241 to 0.004 Step 1: Set your long division bracket: denominator / numerator $$ \require{enclose} 241 \enclose{longdiv}{ 1 } $$ Use long division to solve step one. Yep, same left-to-right method of division we learned in school. This gives us our first clue. Step 2: Extend your division problem $$ \require{enclose} 00. \\ 241 \enclose{longdiv}{ 1.0 } $$ Uh oh. 241 cannot be divided into 1. So that means we must add a decimal point and extend our equation with a zero. Now 241 will be able to divide into 10. Step 3: Solve for how many whole groups you can divide 241 into 10 $$ \require{enclose} 00.0 \\ 241 \enclose{longdiv}{ 1.0 } $$ How many whole groups of 241 can you pull from 10? 0 Multiply by the left of our equation (241) to get the first number in our solution. Step 4: Subtract the remainder $$ \require{enclose} 00.0 \\ 241 \enclose{longdiv}{ 1.0 } \\ \underline{ 0 \phantom{00} } \\ 10 \phantom{0} $$ If there is no remainder, you’re done! If you have a remainder over 241, go back. Your solution will need a bit of adjustment. If you have a number less than 241, continue! Step 5: Repeat step 4 until you have no remainder or reach a decimal point you feel comfortable stopping. Then round to the nearest digit. In some cases, you'll never reach a remainder of zero. Looking at you pi! And that's okay. Find a place to stop and round to the nearest value. Why should you convert between fractions, decimals, and percentages? Converting fractions into decimals are used in everyday life, though we don't always notice. Remember, they represent numbers and comparisons of whole numbers to show us parts of integers. This is also true for percentages. So we sometimes overlook fractions and decimals because they seem tedious or something we only use in math class. But 1/241 and 0.004 bring clarity and value to numbers in every day life. Here are examples of when we should use each. When you should convert 1/241 into a decimal Dollars & Cents - It would be silly to use 1/241 of a dollar, but it makes sense to have $0.0. USD is exclusively decimal format and not fractions. (Yes, yes, there was a 'half dollar' but the value is still $0.50) When to convert 0.004 to 1/241 as a fraction Distance - Any type of travel, running, walking will leverage fractions. Distance is usually measured by the quarter mile and car travel is usually spoken the same. Practice Decimal Conversion with your Classroom • If 1/241 = 0.004 what would it be as a percentage? • What is 1 + 1/241 in decimal form? • What is 1 - 1/241 in decimal form? • If we switched the numerator and denominator, what would be our new fraction? • What is 0.004 + 1/2? Convert more fractions to decimals From 1 Numerator From 241 Denominator What is 1/242 as a decimal? What is 2/241 as a decimal? What is 1/243 as a decimal? What is 3/241 as a decimal? What is 1/244 as a decimal? What is 4/241 as a decimal? What is 1/245 as a decimal? What is 5/241 as a decimal? What is 1/246 as a decimal? What is 6/241 as a decimal? What is 1/247 as a decimal? What is 7/241 as a decimal? What is 1/248 as a decimal? What is 8/241 as a decimal? What is 1/249 as a decimal? What is 9/241 as a decimal? What is 1/250 as a decimal? What is 10/241 as a decimal? What is 1/251 as a decimal? What is 11/241 as a decimal? What is 1/252 as a decimal? What is 12/241 as a decimal? What is 1/253 as a decimal? What is 13/241 as a decimal? What is 1/254 as a decimal? What is 14/241 as a decimal? What is 1/255 as a decimal? What is 15/241 as a decimal? What is 1/256 as a decimal? What is 16/241 as a decimal? What is 1/257 as a decimal? What is 17/241 as a decimal? What is 1/258 as a decimal? What is 18/241 as a decimal? What is 1/259 as a decimal? What is 19/241 as a decimal? What is 1/260 as a decimal? What is 20/241 as a decimal? What is 1/261 as a decimal? What is 21/241 as a decimal? Convert similar fractions to percentages From 1 Numerator From 241 Denominator 2/241 as a percentage 1/242 as a percentage 3/241 as a percentage 1/243 as a percentage 4/241 as a percentage 1/244 as a percentage 5/241 as a percentage 1/245 as a percentage 6/241 as a percentage 1/246 as a percentage 7/241 as a percentage 1/247 as a percentage 8/241 as a percentage 1/248 as a percentage 9/241 as a percentage 1/249 as a percentage 10/241 as a percentage 1/250 as a percentage 11/241 as a percentage 1/251 as a percentage
{"url":"https://www.mathlearnit.com/fraction-as-decimal/what-is-1-241-as-a-decimal","timestamp":"2024-11-04T05:51:49Z","content_type":"text/html","content_length":"33213","record_id":"<urn:uuid:92cf49df-6f19-40bf-afc8-e00220668528>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00520.warc.gz"}
Netsweeper: playing Minesweeper without the old-fashioned grid How network science can help you play this fun new variant of the classic computer game. If you were born in the previous century, then chances are high that you have spent quite some hours playing Minesweeper, the classic puzzle game that used to be installed on every computer. In this article, we will present a new network-based version of this game and show how network science can help you play this game. The original game is played on a grid where each square may or may not contain a mine. The goal is to step on all squares that do not contain a mine, while not stepping on any mines. You can step on a square by clicking on it. If the clicked square contains a mine, then the game ends and you lose (your foot). Interestingly, in the initial version, the cursor was a foot that would explode and turn into a bloody stump when stepping on a mine. This cursor was removed in later versions for obvious reasons. If the square does not contain a mine, then it will show the number of mines that are on the 8 adjacent squares. If there are no mines on adjacent squares, then these adjacent squares will also automatically be ‘stepped on’. The game is won when all squares without mines have been stepped on. If this description does not yet trigger nostalgia, then it may be good to try out the original game before reading the rest of the article. There exist many variants of this game, including versions with triangular grids, hexagonal grids and three-dimensional layouts. If we look at this game and these variants more abstractly, then what all these versions have in common is that the ‘minefield’ can be described as a number of locations (squares, triangles, hexagons or cubes) and some way of telling which locations are adjacent. The minefield could then be viewed as a network where each node corresponds to a location and where adjacent locations are connected by a link. This abstract way of looking at the game allows us to come up with many different versions of the game by simply replacing this network. For example, we could simply take the twelve provinces of the Netherlands as nodes and connect them by a link if they share a border. We could go even further than this by abandoning the notion that the network should represent adjacent shapes. For example, we could create a network out of the UEFA EURO 2020 football cup where each node represents one of the 24 football teams and each link represents one of the 51 matches. In such a game, would you start by stepping on the team that won the tournament (and thus played against the most other teams), or would you start with a team that dropped out at the beginning? Introducing Netsweeper This gives us a far more general version of the original Minesweeper game, that we name Netsweeper, since we replace the old-fashioned grid by a network. The game can be played at the bottom of this page. In the remainder of this article, we will discuss strategies for playing this game, and how this strategy depends on the characteristics of the network that we are playing on. This will help you sweep through any network, and you might also learn some network science on the way! Basic deduction rules We start by summarizing two basic strategies of Minesweeper that can also be followed for Netsweeper: firstly, if after stepping on a node, it indicates that it has one mine in its neighborhood, while we see that all but one neighbors have already been stepped on, then we can deduce that this last neighbor must contain a mine. Of course, this rule also works for numbers higher than one: if the number indicated on the mine is equal to the number of unknown neighbors, then we know that all these neighbors contain mines. Secondly, if we have located a mine (and conveniently marked it with a flag) and see that one of its neighbors has a 1 on it, then we know that all other neighbors of that 1-node cannot contain mines, so that we can safely step on them. Again, this rule also works for higher numbers of mines. Avoiding hubs If the network contains some nodes with many links, it may affect the way you play the game. In network science, nodes with many links are often referred to as hubs. The neighborhood of such a hub likely contains many mines, along with many non-mines. Therefore, when we step on such a node and get to know the number of mines in its neighborhood (assuming it itself is not a mine), it is unlikely that it will allow us to apply any of the deduction rules above. Therefore, especially at the beginning of the game, it can be helpful to avoid stepping on hubs and focus on the nodes with fewer neighbors. Divide and conquer One disadvantage of playing minesweeper on a network rather than a grid is that the links may sometimes clutter the view, making it difficult to get a good overview of the network. In our game, when clicking on a node that has already been stepped on and does not neighbor any mines, the node and its links will be deleted. This is especially helpful when the network consists of loosely connected components. When disconnecting these, one can look at each of the components independently, as several smaller minefields. Symmetry and randomness If the network that we are playing on has many symmetries, then this may influence the gameplay. Take for example the network shown on the right, which is known as the Petersen graph. You may notice that this network looks somewhat symmetric, since rotating the figure by one fifth will not change the image. Actually, this network is completely symmetric: we can move any node to the place of another node and then rearrange the other nodes so that we obtain the original image again. Therefore, at the beginning of the game, it doesn’t matter on which node you click, as they are all the same. Now suppose that we know that this network contains three mines and that after stepping on the first node, we see it has one mine in its neighborhood. Then we know that one of the three neighbors must have a mine, while the six remaining nodes must contain two mines. This means that, whichever node we click, we have a 2/6=1/3 probability of stepping on a mine. Again, the symmetry results in (probabilistically) the same outcomes for every move. Randomly generated networks In this article, I named several networks on which Netsweeper can be played. However, there are many more possibilities, and some are already available in the game. Two of these networks are also randomly generated, so that each time you restart the game, you will step into a different minefield. The first and simplest random network is the Erdős–Rényi network. Here we have a number of nodes, and for each pair of nodes, we randomly decide (say, by tossing a biased coin) whether to place a link between them or not. The second random network is the preferential attachment network. A network that is generated by iteratively adding nodes to the network and connecting (attaching) them to existing nodes randomly, but with a higher probability (preference) of connecting to nodes that already have many neighbors. This results in a network that likely contains a few hubs, which makes the game more difficult. For more about preferential attachment networks, you can read this article. The game can be played below. If you are on your computer, flags can be placed by right-clicking. When you’re on the phone, you can place flags by toggling to “Flag mode” before clicking on the node where you want to place the flag. To change the network, select one in the drop-down menu and click the smiley to restart the game. This gives us endless minefields to sweep. Each network has its own characteristics which influence the gameplay and strategy. Do you notice this difference when playing on the different networks? And can you also see which characteristics cause these differences? Related articles
{"url":"https://www.networkpages.nl/playing-minesweeper-without-the-old-fashioned-grid/","timestamp":"2024-11-09T04:14:23Z","content_type":"text/html","content_length":"88133","record_id":"<urn:uuid:bed2b2e5-fe3f-4da6-9d4b-b801d7421d05>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00378.warc.gz"}
Collision Time Collision Time Calculation In his science fiction novel "The Black Cloud" Fred Hoyle introduces a mathematical concept that at first seems very strange. An astronomer finds a dark region on a photographic plate he has just exposed. A few days later another image is taken. The black 'object' is still there in the same place with respect to the stars, but it has increased in size. The situation is illustrated by the two stellar images shown below: Between the two observations the object has 'grown' by 20.5 arcseconds. Now we only know the angular diameter of the object at the two times, We don't know its true size (diameter) and we don't know how fast it is travelling. Despite this our intrepid astronomer calculates that the object will collide with the Earth in 10 days! (Hoyle even shows how to do this calculation as a footnote to the story!) The first question we should ask ourselves is "How do we know a collision is going to occur at all?" The situation is similar to one experienced in terrestrial aviation (shown at right). If the pilot of the fast plane has a constant bearing to the plane on his right, then he knows that a collision will occur, and he must change his direction and/or altitude. If the bearing angle B[c] increases with time, the fast plane will arrive at the point X before the slow plane and he will pass in front without a collision. If the bearing angle decreases with time the slow plane will arrive at the point X before the fast plane and again there will be no collision. However, if the bearing angle is constant, the pilots know that a collision is certain, despite not knowing the speed of the other aircraft and their distances to the intersection point. In the astronomical case the astronomer knows that a collision will occur because the object remains in the same place relative to the starry background. Of course, he has to assume that the instrinsic size of the object does not change with time, and that the object's velocity is also contant. That is, it travels in a straight line with constant speed. Now in reality, the Earth and the other bodies in the solar system orbit around the Sun in elliptical orbits. Any visitor to the solar system would also be influenced by the Sun and would show some type of curve, even it is not an ellipse but rather a parabola or hyperbola. However, over short periods of time, a small section of the orbit will be close to a straight line. So even only a small shift in position between the two images would be cause for concern, for any object that showed significant growth in angular diameter. The geometry of the situation is shown below: Let us denote the unknown instrinsic diameter of the object as 'a', and its unknown speed (relative to the Earth) as 'v'. The only measurements we can make are apparent angular diameters θ[1] at time t[1] and θ[2] at time t[2]. Let us designate the time between the two observations as Δt[o] = t[2] - t[1]. Furthermore let us designate the time between the second observations and the time of collision as Δt[c] = t[3] - t[2]. This is the remaining time until collision occurs. Now as long as the angle θ subtended by an object is not too large (ie less than a few degrees), we can use the relation {θ = object_diameter / object_distance}. Thus: θ[1] = a / d[1] or d[1] = a / θ[1] θ[2] = a / d[2] or d[2] = a / θ[2] Now from the relation that {distance = speed x time} we have: which by substitution gives: v Δt[o] = a ( 1/θ[1] - 1/θ[2]) [Equation 1] In a similar fashion we find that: v Δt[c] = a / θ[2] [Equation 2] Dividing equation [2] by equation [1] and rearranging a little gives us the solution: Δt[c] = θ[1] Δt[o] / ( θ[2] - θ[1] ) an expression which involves neither 'a' nor 'v'. That is, we do not have to make any assumption about the object size or speed. For the above example we can substitute the values of time and subtended angles to give the time to collision: Δt[c] = 8.5 x (25 - 1) / ( 29.0 - 8.5 ) = 10 days Note that whatever the units (days, hours or seconds, etc) we use for Δt[o] will be the same units in which Δt[c] appears. Australian Space Academy
{"url":"http://spaceacademy.net.au/flight/emg/colltime.htm","timestamp":"2024-11-02T23:30:05Z","content_type":"text/html","content_length":"5916","record_id":"<urn:uuid:330856b5-6c41-4a6e-8239-2b4d5b5f0744>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00396.warc.gz"}
Maximum CVaR Deviation for Gain (max_cvar_dev_g) Maximum CVaR Deviation for Gain. There are M Linear Loss scenario functions (every Linear Loss scenario function is defined by a Matrix of Scenarios). M new CVaR Deviation for Gain functions are calculated (for every -(Loss) scenario function). Maximum CVaR Deviation for Gain is calculated by taking Maximum over M CVaR Deviation for Gain functions (based on -(Loss) scenarios). max_cvar_dev_g(α,matrix_1,matrix_2,...,matrix_M) short call max_cvar_dev_g_name(α,matrix_1,matrix_2,...,matrix_M) call with optional name matrix_m is a Matrix of Scenarios: where the header row contains names of variables (except scenario_probability, and scenario_benchmark). Other rows contain numerical data. The scenario_probability, and scenario_benchmark columns are is a confidence level, Mathematical Definition Maximum CVaR Deviation for Gain function is calculated as follows is CVaR Deviation for Gain function, M = number of random Loss Functions = vector of random coefficients for m-th Loss Function; = j-th scenario of the random vector , is an argument of Maximum CVaR Deviation for Gain function. Data for calculation of Maximum CVaR Deviation for Gain are represented by a set of matrices of scenarios which may be in pmatrix form. See also Maximum, Maximum for Gain, Maximum Deviation, Maximum Deviation for Gain, Maximum CVaR , Maximum CVaR for Gain, Maximum CVaR Deviation, Maximum VaR , Maximum VaR for Gain, Maximum VaR Deviation, Maximum VaR Deviation for Gain
{"url":"https://aorda.com/html/PSG_Help_HTML/max_cvar_dev_g.htm","timestamp":"2024-11-06T10:50:44Z","content_type":"text/html","content_length":"20140","record_id":"<urn:uuid:c48aba59-5409-4401-abe3-81550203f641>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00394.warc.gz"}
YTF 22 The ghost-free massive gravity theory of de Rham, Gabadadze and Tolley (dRGT) has attracted a lot of attention since its formulation over a decade ago. Many studies have looked at its consequences for cosmology, and explored various limits in which the theory simplifies. However, until now few attempts have been made at numerically simulating its full non-linear equations, as an explicit dynamical formulation, analogous to the ADM formulation of GR, was not known. In this talk, based on work with de Rham, Tolley and Wiseman, I will briefly introduce the history and nuances of the formulation of massive gravity. I will then outline a dynamical formulation for the minimal and next-to-minimal dRGT models with a flat reference metric, explicitly identifying the phase-space variables, their associated momenta, as well as the evolution and constraint equations. I will go over the construction of initial data, which, like in GR, must still obey the Hamiltonian and momentum constraints. Finally, the techniques developed will be applied to perform numerical spherically symmetric gravitational collapse of scalar field matter for the minimal model, finding generically that this model breaks down before any large curvatures can appear.
{"url":"https://conference.ippp.dur.ac.uk/event/1141/timetable/?print=1&view=standard","timestamp":"2024-11-01T20:12:00Z","content_type":"text/html","content_length":"246338","record_id":"<urn:uuid:53070066-4fe0-4a72-81a3-fda8eeccf0ae>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00293.warc.gz"}
Sequence Calculator - Daisy Project India Sequence Calculator If you would like to remedy more issues, you can merely enter them within the new window and get results. It should be famous that the numbers of a sample have to be comma-separated, in any other case, the calculator would not work. If you are installing your personal carpet, you will need other supplies (floor tape and tackless strip) and specialized tools (e.g., knee-kicker, energy stretcher, and binder bars) as well. Optionally, estimate the value of the carpet and padding you would like to use by adding the value. The difference between combos and permutations is that whereas when counting mixtures we do not care about the order of the things we combine with permutations the order issues. Permutations are for ordered lists, whereas combos are for unordered groups. Use this nCr calculator to easily calculate the number of combinations given a set of objects (types) and the number you have to draw from the set. N choose K calculator on-line to calculate how many combinations with N numbers are • A Solve the Pattern Calculator works by taking in a sample of numbers after which fixing a mathematical expression for the stated sample. • Browse via our wheels and spin to randomize your life and make the choices that don’t have any incorrect solutions. • Keep in thoughts that while carpet and padding are bought by the yard, some brands will offer the prices in square footage, as most individuals are typically aware of the tough square footage of a space. • Of course if you understand your year make and model, simply use our bolt sample software to seek out your lug pattern. • If you are considering another flooring choice, corresponding to tile, attempt our tile calculator. Browse by way of our wheels and spin to randomize your life and make the choices that haven’t any incorrect answers. This is how we remedy the sequence by finding the mathematical answer to the worth an. Pressing the button will open a new window in front of you with the solution. In some cases, repetition of the identical factor is desired in the combinations. The formulation for its answer is supplied above, however in general it’s more handy to only flip the “with repetition” checkbox in our mixture calculator and let us do the be just right for you. Costs for carpet can range widely from $9/sq yd ($1/sq ft) for apartment-grade materials to over $90/sq yd ($10 /sq ft) for luxurious pure supplies. You need to use carpeting that prices $54/sq yd and padding that prices $4.50/sq yd. If you should measure in models other than ft, then you can use our length calculators to convert them to toes. In the following step, you’ll calculate the whole space in sq. toes. Often encountered issues in combinatorics contain choosing k parts from a set of n, or the so-called “n choose k” issues, also referred to as “n choose r”. The Solve the Pattern Calculator is an online calculator designed to search out the solution to your Sequence problems. Depending on the pattern and nap type, you could need to use one 12′ x 14′ size of carpet and either a single 4′ x 12′ length or two 2′ x 12′ lengths, one on both side of the large section. If you’re contemplating an alternative flooring option, such as tile, strive our tile calculator. Sarabeth is an expert within the house and garden industry and was formerly a certified kitchen and toilet designer. She is a topic expert in residence improvement and is often quoted in notable publications. The above equation due to this fact expresses the number of ways for choosing r distinctive unordered outcomes from n possibile entities and is also known as the nCr method. Our online calculators, converters, randomizers, and content are provided “as is”, free of cost, and with none warranty or guarantee. Each software is fastidiously developed and rigorously tested, and our content material is well-sourced, but despite our best effort it is potential they contain errors. We are to not be held liable for any ensuing damages from proper or improper use of the Plug the total lengths and widths into the calculator above to search out the quantity of carpet and pad required in sq. toes and square yards. In our experience, it’s a good idea to order some extra materials to account for cuts. If you need to order some additional, then you possibly can multiply the outcome by 1.1 to add an extra 10%, for instance. Use the outcome to estimate your price by multiplying it by the carpet and padding price per sq. yard. Keep in thoughts that whereas carpet and padding are bought by the yard, some brands will offer the costs in square footage, as most individuals are generally conscious of the tough square footage of an area. However, most individuals pay professionals to install their flooring since it’s a skilled job that normally takes professionals lower than a day to complete. Besides the carpet itself, almost all carpets require padding underneath. Padding is sold in the identical method as carpet but is way inexpensive. Padding ranges from 10 cents to 60 cents or more per sq. foot, or $0.90 to $5.forty a sq. yard. As with any flooring materials, you will want to measure accurately. Using a tape measure, measure the room wall-to-wall, not baseboard-to-baseboard, since baseboards are installed on top of your carpet. The closest you can get is to measure from the middle of one lug to the outer fringe of the lug farthest away. Of course if you know your 12 months make and mannequin, just use our bolt pattern software to search out your lug pattern. For instance, let’s estimate flooring for a living room that’s 12′ x 15′. Roulette has provided glamour, thriller, and pleasure to casino-goers for the explanation that 17th century. The sport is popular worldwide in part because its rules are comparatively simple and easy to know. However, roulette supplies a stunning stage of depth for severe betters. If you desire a fast and easy information to this sport before betting it all on black, keep reading. We’ll break down all of the fundamentals so you know precisely where to position your chips on the table and the means to deal with your winnings. A Sequence is a group of information factors, things when you may however from a mathematical standpoint, these could be Numbers, which are Ordered in some form or type. Watch Articles These can all be verified using our ncr formula calculator above. This will provide you with an estimate of the fabric prices on your project. For budgeting purposes, you can add $4.50 to $18 per sq yd ($0.50 to $2 per sq ft) to cowl labor costs. New carpeting can characterize a big funding in your home. Installing it can be difficult for DIYers to take on themselves. Compared to different flooring options, nonetheless, carpet adds exquisite warmth and sound cushioning to your home. Now, before we go deeper into the understanding of how a Solve the Pattern Calculator works step-by-step, we first find out about Sequences in more element. And it does all of it inside your browser with no need for any further downloads. We are dedicated to protecting and respecting your privacy and the security of your knowledge. We comply with GDPR, CCPA, SB 190, SB 1392, and we closely monitor changes to them. We follow business finest practices for knowledge encryption and backups. Keep in thoughts that many sellers offer remnants, so you will not need to double your sq. footage every time. The first step to any new flooring installation is to find out how large of an space you want to cover. Installing beautiful new carpet can add worth and comfort to your personal home. Carpet comes in many varieties in varying levels of high quality and cost. How To Measure Bolt Patterns? The reactive close to field is the region the place the fields are reactive i.e the E and H fields are out of part by ninety levels to one another. For propagating or radiating fields, the fields should be orthogonal to one another however in part. It logically repeats itself as compared to a Mathematical one, but based mostly on the pattern, we are able to calculate the next value to be 97, added with 24. To Solve a given sample or sequence means to seek out out the values that may Succeed those given to us. This is finished through the use of several techniques, which we are going to undergo here. This Calculator can’t solely find out future values of the sequence but in addition if a viable Mathematical Model exists, it might possibly derive that for the pattern too. A sequence represents some kind of Mathematical Expression on the core of a set of numbers, these could be finite or infinite. But you needn’t fear as this Calculator can solve those issues in a blink of an eye fixed. It also can present a Mathematical Expression describing the sequence itself. And all you need to do is enter the sequence and press the button to get results. Consult our carpet value calculator and worth information to be taught more about estimating the value of flooring. Solve For A Sequence The Solve the Pattern Calculator is used to resolve for future values of a Sequence; it analyzes and predicts the values which might come next within the sequence. This Calculator is certainly unique, because there isn’t any straightforward methodology of doing this, and it takes lots of hits and trials to get the answer to such an issue. Unlike another flooring materials, carpet is available in standard widths. The commonest measurement is 12′ widths, but sometimes yow will discover widths of 13′ 6″ and 15′. A lot of occasions in common usage individuals name permutations”mixtures” incorrectly. In another example – if you would like to estimate how many computing hours you want to brute force a hashed password you calculate the number of permutations, not the number of combinations. The result’s the number of all attainable methods of selecting r non-unique elements from a set of n parts. In some variations of the above formulas r is changed by k with no change of their end result or interpretation. So we begin off the identical analyzing strategy for solving this problem, and we can see that the sample is somewhat more sophisticated to get with out the mathematical expression, so let’s try to make sense of it. Depending on the place doorways are positioned and the shape of the room, this will likely change the path the carpet is installed in. So, you will need to add further length to get adequate carpet for the extra width. When planning any new flooring project, you will need to determine how a lot flooring you need. Stay tuned, we’ll present you how to easily estimate carpeting for your flooring Different Helpful Tools: Here is a desk with options to generally encountered mixture issues generally recognized as n select k or n choose r, relying on the notation used. Track the patterns of roulette wins with this printable run sheet that lists numbers, evens/odds, and colours. Wheel Decide is a free online spinner tool that lets you create your own digital wheels for choice making, prize giveaways, raffles, games, and extra. Then, to get the world in square yards, divide this by 9, since there are 9 sq. feet per sq. yard. To do that calculation your self, multiply the floor width by size in feet to get the sq. footage. If you measured a number of sections, do this for each, then add the square footage of each section collectively to get the whole sq. footage. If the ground plan is complicated or there are multiple rooms, break the house up into smaller, regularly formed sections. Measure each independently, and after you calculate the square footage of every, add them collectively to get the whole quantity for the realm. There isn’t any functionality to discover out which entry will win forward of time. When you click the wheel, it accelerates for exactly one second, then it’s set to a random rotation between zero and 360 degrees, and eventually it decelerates to a cease. The setting of a random rotation is not seen to the bare eye as it occurs when the wheel is spinning quite fast. Click here to study extra in regards to the Near Field and Far Field regions. Calculate the following entry of the sequence and in addition find the mathematical mannequin of this sequence. Carpet Calculator And Value Estimator To assist you to estimate your cost, you can optionally enter the cost of the carpet and/or the padding you want into the calculator above. However, because yards are larger, you might discover that you have to order barely extra carpet in some circumstances, which makes getting your estimates in sq. yards more accurate than in square feet. Most manufacturers promote carpet by the square yard, which is the identical as 9 sq. toes. It might be best to measure your space in toes, then convert to sq. yards later. To get a four, 6 or eight lug bolt sample use a device to measure from the center of one lug to the middle of the lug immediately accross from it. The knowledge of the two are interlinked, and the operation may be understood instantly. Press enter or the mobile phone keyboard to switch lines to automatically generate the next item, which could be very handy to operate. The input mode is suitable for people who immediately paste from the clipboard or other locations, in fact, that is only a personal usage behavior. This example will show tips on how to estimate carpet when the nap can be installed in a different direction. There are three conditions which must be glad to ensure that the antenna is at a distance which qualifies as the far area. The radiating close to area or Fresnel region is the area between the reactive near and much area. However unlike the far area region, the form of the radiation sample varies significantly with distance. A Solve the Pattern Calculator works by taking in a pattern of numbers and then fixing a mathematical expression for the mentioned sample. These patterns are also known as Sequences, as one of the very popular sequences is a Fibonacci Sequence. Keep in thoughts that buying and renting these things will increase the price of your DIY project. Use our bolt pattern device to search out bolt patterns for any automobile. Select a yr,make,mannequin and option from the drop downs to find the bolt patterns on your vehicle. When talking about antennas the far subject is the region that is at a big distance from the antenna. In the far area the radiation pattern does not change shape as the distance will increase. Ideally, the nap or the pattern of the carpet ought to all run in the same course. Changing course at a minimize or at the end of a run might end in an unprofessional look, relying on the fabric used. To estimate carpeting amount and cost, enter the size of the realm you wish to cover.
{"url":"https://daisyprojectindia.org/1363674174615815650-2/","timestamp":"2024-11-07T22:51:09Z","content_type":"text/html","content_length":"148502","record_id":"<urn:uuid:70ff4840-4642-4c89-a3b2-662f890c1060>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00338.warc.gz"}
How do I prepare for the section on Green's Theorem in the multivariable calculus exam? | Hire Someone To Do Calculus Exam For Me How do I prepare for the section on Green’s Theorem in the multivariable calculus exam? This article has been already read 5 times. Now I have to explain it to the reader. I will do the math, but some questions don’t seem to make it easier to answer. I will explain them through an “ask”. Let $s$ be a smooth surjective function. This can be seen as the sum of the derivative of $s$ with respect to $X$ of the curve $(X,\delta_0)\in\CH_\infty$ from $\{X\in{\mathcal{A}}_1,X\in{\mathcal{A}}_2\}$. I take the restriction of $s$ to $\CH_\infty$ from the definition of Cartan sectional curvature, so $\chi_0(s)=2\chi_0(s’)$. Then the function $\theta\in S^1(\G\times\C^2)$ is a smooth section of $\CH_\infty$ and has as its isomorphism class all possible $k$-forms $f^k$. The origin of the topological invariateness of a general smooth function $s$, and which has properties similar to the three-torus case, is the divisorial determinant of its Chern character in $\C^2$, this can be seen in the construction of get more (3) from section 1 of Gabriel’s “Theory of Compact Lie algebras” [@Eli97 Appendix 1.2, by S.D. Hariri]. Now let $X$ be any smooth function on $\G^2$, and let $h$ denote its Chern character. The function $h(u,v,t)$ has scalar multiplication by $2\chi\,[(s,x)^{k-1}-(u,v,t)\Bigr]$, where $\chi$ is the inner product of $s$ and $t$. This is the same thing as the determinant ${2{\,{{d}}^M}}=({2\st\,{s}^{p-1}-{2\st\,{u}^{p-1}}-{2\st\,{v}^{p-1}}-{2\st\,u}^{p-1}}-{2\st\,{u}^{p-1}-{2\st\,{v}^{p-1}}-{2\st\,{u}^{p-2}}-{2\st\,v}^{p-1}})$, where $\chi\{x\}$ is the scalar term of the decomposition principle. However, there are (and sometimes quite, though not always) different types of determinant that we shall see in section 4, soHow do I prepare for the section on Green’s Theorem in the multivariable calculus exam? Main question: How are the conditions of this theorem being examined while under study? Main question: How do you handle the case when the parameters $\alpha$ and $\beta$ depend on $\theta$? Main question: How are the conditions of this theorem being examined while under study? Answer: You can skip this question. You have already specified how to cover the case that $\alpha = \beta$, but you have not provided the very particular condition that $\sigma_\alpha (\theta)$ depends on $\theta$ or on $\tau$. As you had mentioned earlier, you can take the “right side” of $T$ as given in the previous section. This way the case where $T$ is closed in $\mathbb{C}$ or real in another integral domain is dealt with first. Take the limit set $\mathcal{S}$ of the limit sets $W$ of $\operatorname{Im}\sigma_\alpha$ and $\mathcal{\Gamma}$ as look at this web-site in the previous section, and a family of closed linear submanifolds, say $X_W$. Take My Accounting Exam When passing to the limit set $X;$ then you can define $X$ to be $X_W$ if $X \; \subset\; X_W$ but instead $X \; \subset E_O;$ for any real $E_O$. In particular when $X$ is real, for arbitrary fixed $\tau$, there is a unique closed linear submanifold $Y_W$ of $X \; \subset\; X_W$, namely one that maps into $\mathcal{S}$ for arbitrarily large $T$, i.e. $$\mathcal{S} = \mathcal{\Gamma}-(K_Y+AHow do I prepare for the section on Green’s Theorem in the multivariable calculus exam? If you have the following questions, which you think can help you, please feel free to e-mail me at [email protected…] Abstract In part 1, the results of this book, Green’s Theorem, is an elegant formulation of group cohomology theory. This section introduces the definition, the theory of group cohomology and, when “constructive” to a theory and is rather general in form, is both the basic and essential components of the theory. The first sections are called the Green’s Theorem for these two topics and we have introduced a see page definition and a non-trivial invariant form for this theory. This section also introduces the theory of “generalized” cohomology or “generalized cohomology group.” The third section (i) describes exactly where this result lies in a literature, since of course in practice it is likely that it belongs to a better known theory. We have shown that this part of the theory is fairly wide of scope (although we have not shown that all parts are broad as the contribution of a read this book to the theory is brief). The result goes as follows: If we had known something close to the answer, it would involve, obviously, the computation of the cohomology and hence cohomology groups by means of the homotopy theory (or homotopy spectral sequence). If we knew nothing of the theory (or the structure of the theory), then we believe this would be of practical significance. This section of the book explains how there is a sense of “generalized” cohomology groups that is all that matters. It is as follows: For example, the abelianization of the group cohomology group by a subgroup of order two will not admit a generalization to the abelianization of the algebraic geometry. Most groups grow using the fundamental groups
{"url":"https://hirecalculusexam.com/how-do-i-prepare-for-the-section-on-greens-theorem-in-the-multivariable-calculus-exam","timestamp":"2024-11-15T03:08:06Z","content_type":"text/html","content_length":"102170","record_id":"<urn:uuid:183dd0f1-2a79-4491-b5ad-4cf7d5cb08e7>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00034.warc.gz"}
Aptitude Questions and Answers This section covers solved exercises of Quantitative Aptitude Questions and Answers on topics like Time and Distance, Time and Work, Averages, Ages, Boats and Streams, Trains, Pipes and Cisterns, HCF, LCM etc. The examples range from easy to difficult questions. The short tricks and explanation provided with each question help in understanding the concept and solving the question quickly. These Quantitative Aptitude Questions and Answers will be useful for all freshers, college students and engineering students preparing for placements and various entrance exams like MBA, CAT, NMAT, XAT, SNAP, MHCET, UPSC, CLAT IBPS, SBI, RRB etc.
{"url":"https://www.tutorialride.com/aptitude-questions.htm","timestamp":"2024-11-08T14:46:55Z","content_type":"text/html","content_length":"10618","record_id":"<urn:uuid:4d9504af-f30f-4249-9b1e-e616284eecc8>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00204.warc.gz"}
Limits and Derivatives Class 11 Limits and Derivatives Class 11 Notes Limits and derivatives class 11 notes cover concepts such as the intuitive idea of derivatives, limits, and trigonometry functions and derivatives. Limits and derivatives have the scope, not only in Maths but also they are highly used in Physics to derive some particular derivations. We will discuss here the Class 11 limits and derivatives syllabus with properties and formulas. Class 11 Chapter 13 Limits and Derivatives Concepts The limits and derivatives chapter of class 11 includes the following concepts. • Introduction • Intuitive Idea of Derivatives • Limits • Limits of Trigonometric Functions • Derivatives Let us learn and understand all the concepts in limits and derivatives. Limits Definition A limit of a function f(x) is defined as a value, where the function reaches as the limit reaches some value. Limits are used to define integration, integral calculus and continuity of the function. If f(y) is a function, then the limit of the function can be represented as; This is the general expression of limit, where c is any constant value. But there are some important properties of limits which we will discuss here. Also, check: Sandwich theorem Properties of limits of a given function A. Let ‘p’ and ‘q’ be any two functions and ‘a’ be any constant value in such a way that, there exists, lim[x→a] p(x) and lim[x→a] q(x). Now, check below the properties of limits. B. For any positive integers m, lim[x→a] p(x)(x^m-a^m )/(x-a)= na^m-1 Also, learn about the algebra of limits here. C. Limits of trigonometric functions: If p and q are real-valued functions with the same domain, such that, p(x) ≤ q(x) for all the values of x. For a value b, if both lim[x→a] p(x) and lim[x→a] q(x) exists then, lim[x→a] p(x) ≤ lim[x→a] q(x) To understand more about the limits of trigonometric functions, visit here. Example:Let f(x) = x^2 – 4. Compute \(\lim_{ x \rightarrow 2} f(x)\). Solution: \(\lim_{ x \rightarrow 2} f(x)\) = \(\lim_{ x \rightarrow 2} x^2 – 4\) = 2^2 – 4 = 4 – 4 = 0 To know the limits of polynomials and rational functions, visit here. Derivatives Definition A derivative is defined as the rate of change of a function or quantity with respect to others. The formula for derivative can be represented in the form of; \(\lim_{ a \rightarrow 0} \frac{f(x+a)- f(x)}{a}\) The derivative of a function f(x) is denoted as f’(x). Now, let us see the properties of derivatives. Properties of derivatives for given functions: Let for functions p(x) and q(x), the properties of derivatives are; Also, read: Derivatives of polynomials and trigonometric functions Let’s have a look at the example given below: Example: Calculate d/dx(x^4+1) We know, d/dx(x^n)= n x^n-1 and the derivative of a constant value is 0. d/dx(x^4+1)= 4x^3 General Derivative Formulas Derivative of a function Result \(\frac{d(x)}{dx}\) 1 \(\frac{d(ax)}{dx}\) a \(\frac{d(x^n)}{dx}\) nx^n-1 \(\frac{d(sin\ x)}{dx}\) cos x \(\frac{d(cos\ x)}{dx}\) -sin x \(\frac{d(tan\ x)}{dx}\) sec^2x \(\frac{d(cosec\ x)}{dx}\) -cosec x cot x \(\frac{d(sec\ x)}{dx}\) sec x tan x \(\frac{d(cot\ x)}{dx}\) -cosec^2x \(\frac{d(ln x)}{dx}\) 1/x \(\frac{d(e^x)}{dx}\) e^x \(\frac{d(a^x)}{dx}\) a^x (log a) Get more differentiation formulas here. Limits and Derivatives of Class 11 Important Questions and Solutions Question 1: Evaluate: lim[x→2][(x^2-4)/(x-2)] lim[x→2][(x^2-4)/(x-2)] = lim[x→2][(x+2)(x-2)/(x-2)] Cancel the term x-2 from numerator and denominator. Now we get, lim[x→2] x+2 = 2+2 = 4 Question 2: Solve lim[x→2] (sin 2x/x) Given, lim[x→2] (sin 2x/x) We can write it as; lim[x→2] (sin 2x/2x) × 2 Since, lim[x→2] (sin x/x)= 1 Therefore, lim[x→2] (sin 2x/2x) × 2 = 1 × 2 = 2 Practice Problems 1. Find the derivative of f(x) = sin x + cos x using the first principle. 2. Evaluate: lim[x→4] (4x + 3)/(x – 2) 3. Find the derivative of the function f(x) = 2x^2 + 3x – 5 at x = –1. Also prove that f′(0) + 3f′(–1) = 0 Get more important questions class 11 Maths Chapter 13 limit and derivatives here with us and practice yourself. Download BYJU’S app and learn Maths topics relevant to your class with interactive videos.
{"url":"https://mathlake.com/Limits-and-Derivatives-Class-11","timestamp":"2024-11-05T22:58:54Z","content_type":"text/html","content_length":"16750","record_id":"<urn:uuid:dff45ce6-3777-4995-b41f-eb7954d15768>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00872.warc.gz"}
Tasukuea (from Japanese, literally "find squares") is a type of logic puzzles. It is played on a rectangular or square grid with numbers or question signs in some cells. The goal is to blacken some cells of a grid according to the following rules: • Cells with numbers or question signs can not be blacken. • Black cells form square areas that must not be orthogonally adjacent. • A number in a circle indicates the total number of black cells in areas orthogonally neighboring the numbered cell. • A cell with a question sign must have at least one adjacent black cell. • All the white cells must be connected horizontally or vertically. Cross+A can solve puzzles from 3 x 3 to 30 x 30.
{"url":"https://cross-plus-a.com/html/cros7tsku.htm","timestamp":"2024-11-07T00:32:40Z","content_type":"text/html","content_length":"1637","record_id":"<urn:uuid:70db6640-c782-4671-afbf-c6909d33abe4>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00673.warc.gz"}
Linear Splines Lead Author(s): Peter Bacchetti, PhD A linear spline is used in regression models to allow a predictor to have a non-linear effect on the outcome. This is useful when there is evidence against the linearity assumption or when high interest in the predictor's effect warrants more flexible modeling of its effect. Instead of a single slope, the model fits a line that is allowed to change direction at specified points, called knots, thereby allowing V-shape, U-shape, S-shape, and other non-linear relationships to be modeled. An advantage of this approach is that there is still an interpretable coefficient within each range of the predictor between knots. To fit the model, we create a new predictor variable for each range of the original predictor between knots, and the fitted regression coefficients then estimate the effect of the predictor within that range. For example, if we are modeling the effect of age on systolic blood pressure in a linear regression model, we can use the three predictors: agePre40 = min(age, 40) age40to60 = max(0, min(age-40, 20)) age60up = max(0, age-60) This will fit a linear spline with knots at 40 and 60. The variable agePre40 ranges from the minimum observed age up to 40 and is equal to 40 for anyone who is over 40. Its coefficient estimates the effect per year of age within the under 40 age range. The variable age40to60 ranges from 0 to 20; it is 0 for anyone aged 40 or less, increases from 0 to 20 within the 40 to 60 age range, and is equal to 20 for anyone aged 60 or more. Its coefficient estimates the effect per year of age in the 40 to 60 age range. The variable age60up is equal to 0 for anyone aged 60 or less and is equal to age minus 60 for anyone over age 60. Its coefficient estimates the effect per year within the over 60 age range. Suppose we obtain the following estimated coefficients: 0.10 for agePre40 0.45 for age40to60 0.92 for age60up The interpretation is that predicted systolic blood pressure is estimated to increase by 0.10 for each 1 year increase in age up to age 40, by 0.45 for each 1 year increase in age from 40 to 60, and by 0.92 for each 1 year increase in age after age 60. So the estimated difference between age 30 and 45 would be 10*0.10 + 5*0.45 = 3.25, and the estimated difference between age 25 and age 75 would be 15*0.1 + 20*0.45 + 15*0.92 = 24.3. Using linear splines requires deciding how many knots to have and where to put them. Models with different numbers of knots can be compared in terms of how well they fit the data. Statistical criteria such as the Akaike Information Criterion can be used to decide between simpler (fewer knots) and more complex (more knots) models. Knots are typically placed at natural break points (e.g., decades of age), are evenly spaced in terms of the predictor's values (e.g., 15, 30, 45, 60), or are evenly spaced in terms of quantiles in the data set (e.g., 3 knots at the quartiles of the predictor's distribution). Alternative approaches to modeling non-linear effects of a predictor are polynomial models or breaking the predictor into categories. There are also higher order splines, notably cubic splines. These produce smooth fits to the data (no abrupt change of direction), but the coefficients do not have any simple interpretation and the fit usually must be illustrated by graphing.
{"url":"https://ctspedia.org/ctspedia/linearsplines","timestamp":"2024-11-05T06:32:17Z","content_type":"text/html","content_length":"6790","record_id":"<urn:uuid:b38d27a7-3906-4807-8fbe-f8ea11a3cee0>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00759.warc.gz"}
Visual Effect on Mugs When doing a design with a circle on a mug. When I finish the mug and measure the circle the height and width are exactly the same. Except visually it looks like the circle is elongated vertically. Is the a way to correct for this without stretching the design horizontally As there is no â cylinder correctionâ tool in LB at present for GRBL devices. Check out this video I figured it out. If you measure the top of the mug and move the decimal point over one to the left. Then add that dimension to the width of the design, now the circle will look visually correct. A little bit different method than what the video above shows but much simpler. My design was 58.7mm x 58.7mm The top of my mug measured 87.0mm So i added 8.7mm to the width (Height since its rotated) of my design The design will look stretched in Lightburn. But visually it will look correct when viewed straight on on the mug This method is not perfect and you may have to adjust this formula but the circle now looks much better visually even though it is stretched. A little more testing and at least on the mug i was using the stretch needed was a little less than 10%. More like 8% for the best results LightBurn already has a taper warp calculator in recent versions, you find it in the tools menu: 1 Like Thanks, ill check out the taper warp function. But I donâ t think that would help me with the circle issue. Itâ s all about perspective. A small round logo on a specific diameter tumbler will look closer to correct than a large round logo on that tumbler. Notice how the circle of stars in the center look The problem is compounded the smaller the diameter of the tumbler. Adding 10% to the width is a good starting point, but it varies with the logo size and tumbler diameter. The best way to get it to look correct in my opinion is to print the logo on paper and wrap it around the tumbler. Adjust the width and print it again. Keep adjusting and printing until you get the look that works for your There are formulas to figure this out, but I like to go by my eye because the formula may provide the exact amount of distortion to get a perfect circle when viewed straight on, but you typically donâ t view it perfectly straight on. Therefore I go by what I think looks â most correctâ . The taper warp function does work well for adjusting for the taper of your tumbler, but it doesnâ t account for the cylindrical shape. Taper warp is most evident on a logo with vertical sides, less so on a round logo, but I would stll use it with your tumbler in addition to distorting the width. 1 Like Good advice. Thanks. I donâ t have Taper Warp function in my version so i would have to do it manually. But your correct, taper warp would be useful if printing something with square sides but not so much for a pure circle. Shrink it vertically. Sorry, not trying to be a sarcastic jerk. Just pointing out that if you donâ t want it to wrap around the mug further, the other option is shrinking the vertical height. 2 Likes Either way should work. But the formula only works if you are stretching the width This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.
{"url":"https://forum.lightburnsoftware.com/t/visual-effect-on-mugs/153234","timestamp":"2024-11-12T05:22:49Z","content_type":"text/html","content_length":"33314","record_id":"<urn:uuid:5f96d556-ac49-41d0-a6cc-67d17ad6db98>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00860.warc.gz"}
bytes and stuff A while back I was looking at the different maximum values that the different integer data types (uint8, short, int, long, etc) have throughout a couple languages I’ve been using and noticed that none of them ended in zero. I wondered why that was but then relatively quickly realized that it is because integer data types in computers are made up of bytes and bits. An 8-bit (1-byte) integer has a max value of $2^8 = 256$, a 16-bit (2-byte) integer has a max value of$2^{16} = 65536$, etc. In fact since any integer in a computer is made of bits it will have a maximum value of $2^n$. The prime factorization of an n-bit integer is in the notation, it is n 2s all multiplied together. Like so: $2^8 = 8 \cdot 8 \cdot 8 \cdot 8 \cdot 8 \cdot 8 \cdot 8 \cdot 8$ In order for something to end in a zero it must be a multiple of 10, and the prime factorization of 10 is 5 * 2, and the prime factorization of $2^n$ will never contain a 5. Case closed. That was That got me thinking about figuring out how many zeroes are at the end of a number if all you have is the prime factorization. Using my basic arithmetic skills I found out that: • $2 \cdot 5 = 10$ • $2 \cdot 2 \cdot 5 = 20$ • $2 \cdot 5 \cdot 5 = 50$ • $2 \cdot 2 \cdot 5 \cdot 5 = 10 \cdot 10 = 100$ It appears (although isn’t proof) that the lowest quantity between twos and fives dictates how many zeroes are at the end of a number when you’re looking at its prime factorization. I tried this with many more combinations and it worked with every one of them. So what can we do with this information? A factorial is a number written as $n!$ where for any value $n$, $n! = 1 \cdot 2 \cdot 3 \cdot \ldots \cdot n$ . For example $3! = 1 \cdot 2 \cdot 3$ and $5! = 1 \cdot 2 \cdot 3 \cdot 4 \cdot 5 = 120$. The Wikipedia page for factorials shows that $70! = 1.197857167 \times 10^{100}$. That’s a big number, over a googol. You can see the whole thing here on WolframAlpha. So what if we want to know how many zeroes are at the end of $100!$? Or $10000!$? Calculating $10000!$ directly then counting the zeroes at the end of it seems unlikely, or at least not easy, since a number like that wont fit into a standard computer int data type. This is where the prime factorization comes in, we can get the prime factorization of all numbers that go into $n!$. $n! = 1 \cdot 2 \cdot 3 \cdot \ldots \cdot n = 1 \cdot 2 \cdot 3 \cdot (2 \cdot 2) \cdot 5 \cdot (3 \cdot 2) \cdot \ldots \cdot [prime \: factorization \: of \: n]$ And since the associative property of multiplication tells us that parenthesis don’t matter (in this case), all we have to do is count the quantity of twos and fives in the prime factorization of $n! $ and see which is lower. This can be quite time consuming for large values of $n$ though because we will have to get the prime factorization of every number between 1 and $n$. We need a program to do that for us. Leading up to this point we’ve already discovered most of the requirements for the program, it must: 1. Iterate from 1 to $n$. 2. Get the prime factorizations of each of those numbers. 3. Put the prime factorizations of all of those numbers into a list. 4. Once all the prime factorizations for all numbers between 1 and $n$ have been found, it needs to go through the list and count how many twos and fives are in the list. I chose C# to do this project for no other reason than I had been working with it quite a bit lately. The start of the program is pretty boiler plate, read a number and pass it into the method that will do the fun stuff. It doesn’t check if the input is valid or anything like that, that’s too much work for something like this. static void Main(string[] args) int n = 0; Console.Write("Enter a number n, or -1 to quit: "); string str = Console.ReadLine(); n = int.Parse(str); //convert string to int while(n >= 0) Console.Write("Enter a number n, or -1 to quit: "); str = Console.ReadLine(); n = int.Parse(str); The CountZeroesAtEndOfFactorial() method counts the number of zeroes at the end of the number by finding which quantity is lower, twos or fives: private static void CountZeroesAtEndOfFactorial(int number) List<int> factors = GetPrimeFactorizationOfFactorial(number); int twos = 0, fives = 0; foreach (int num in factors) if (num == 2) twos = twos + 1; if (num == 5) fives = fives + 1; //get the smaller quantity int zeroCount = (twos < fives) ? twos : fives; Console.WriteLine("{0}! has {1} zeroes at the end", number, zeroCount); The list of numbers that the CountZeroesAtEndOfFactorial() method uses comes from the method GetPrimeFactorizationOfFactorial(). This is the method that iterates through every number between 1 and $n$, gets the factors in its prime factorization, and puts them into a list. private static List<int> GetPrimeFactorizationOfFactorial(int num) List<int> list = new List<int>(); if (num < 2) return list; for (int i = 2; i <= num; i++) return list; There’s still the matter of getting the prime factorization for any given number though. That’s accomplished with the method GetPrimeFactorization(). This method is based on the fact that if a given number, $n$, is not prime, then there exists some combination of numbers, $a$ and $b$, such that $n = a \cdot b$. From there we know that $a = \dfrac{n}{b}$ (or $b = \dfrac{n}{a}$, it doesn’t really matter which). The important part here is that $\dfrac{n}{b}$ has a remainder of zero. All modern programming languages that I’ve used have an operator that gives you the remainder of an integer division operation, the modulus operator, %. So we’re looking for a number, $b$, that gives us a remainder 0, or in math terms we want $n \: mod \: b = 0$. We could try dividing $n$ by every number between 1 and $n$, but at some point our efforts will not be necessary. This point is the square root of $n$. Since we’re dealing with integers however, we’ll stop when the square of the number we’re testing is greater than $n$. private static List<int> GetPrimeFactorization(int num) List<int> list = new List<int>(); if (IsPrime(num)) return list; int i = 2; while (i * i <= num) if (num % i == 0) int result = num / i; if (IsPrime(result)) if (IsPrime(i)) return list; return list; So this method has two possibilities when finding the prime factorization for a number $n$. 1) $n$ is prime, its prime factorization is itself, just return that value, easy. 2) $n$ is not prime. So there is some number $a$ such that $b = \dfrac{n}{a}$ with a remainder of zero. In this case $a$ and $b$ are factors of $n$, not necessarily the prime factorization of $n$. Since there exists the possibility that there are two more numbers such that $a = c \cdot d$ or $b = e \cdot f$. As this image illustrates, the tree of prime factorization can be many layers deep: This means that we need to get the factorizations of the numbers $a$ and $b$, and the factorizations of their factorizations, and so on, until all we have are prime numbers. This is done through recursion and why GetPrimeFactorization() calls itself in places. The last piece of the puzzle is the IsPrime() method. private static bool IsPrime(int num) if (num == 1) return false; if (num == 2) return true; int i = 2; while (i * i <= num) if (num % i == 0) return false; return true; This method is quite similar to the GetPrimeFactorization() method, but instead of finding the prime factorization of a number it returns false if it finds a number, $b$, such that $b = \dfrac{n}{a}$ It runs fairly fast too. I changed the main method a bit to do multiple runs in one session and timed them, here is a screenshot of the results: Less than one second to calculate the number of zeroes at the end of $1000000!$ isn’t bad, considering that in its full form it has over five million digits. Some final thoughts: 1. This program will only work with values of $n$ that will fit into a 32-bit integer. 2. The quantity of zeroes at the end of $n!$ seems to be roughly a quarter of the value of $n$. What’s up with that? 3. There’s definitely room for optimization on this project, maybe some other time. The program is pretty short, so I’ve just uploaded it to a Gist on GitHub.
{"url":"https://trlewis.net/factorizeroes/","timestamp":"2024-11-14T08:24:41Z","content_type":"text/html","content_length":"52681","record_id":"<urn:uuid:51d14174-8827-4d94-87ad-41c4c5895c18>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00521.warc.gz"}
No Frills Time Series Compression That Also Works CSV + gzip will take you far. • August 22, 2023 So you have some time series data and you want to make it smaller? You may not need an algorithm designed specifically for time series. Generic compressors like gzip work quite well and are much easier to use. Of course this depends on your data, so there’s some code you can use to try it out here. Recently I started working on a way to save Bluetooth scale data in my iOS coffee-brewing app. I want to allow people to record from a scale during their coffee-brewing sessions and then view it afterwards. Scale data is just a bunch of timestamps and weight values. Simple, yes, but it felt like something that might take a surprising amount of space to save. So I did some napkin math: 1 scale session / day 10 minutes / session 10 readings / second = 2.19M readings / year 1 reading = 1 date + 1 weight = 1 uint64 + 1 float32 = 12 bytes 2.19M * 12B = 26 MB 26 MB per year is small by most measures. However in my case I keep a few extra copies of my app’s data around as backups so this is more like ~100MB/year. It’s also 40x the size of what I’m saving currently! This puts my app in danger of landing on the one Top Apps list I would not be stoked to be featured on: [iCloud storage usage] So let’s avoid that. At a high-level I see two options: Save less. 10 scale readings/second is probably more granularity than we’ll ever need. So we could just not save some of them. Of course if I’m wrong about that, they’re gone forever and then we’ll be out of luck. Save smaller. Looking at some example data, there are a lot of plateaus where the same value repeats over and over. That seems like it could compress well. [Example brewing session time series] Picking Ways to Compress This is my first rodeo with compression. I’m starting from basics like “compression makes big things small” and “double click to unzip”. Doing a little research seems like a good idea and it pays My scale data is technically “time series data” and it turns out we are not the first to want to compress it. There is a whole family of algorithms designed specifically for time series. This blog post is a great deep dive, but for our purposes today we’ll be looking at two of the algorithms it mentions: • simple-8b which compresses sequences of integers • Gorilla which compresses both integers as well as floating point numbers Algorithms designed for exactly my problem space sound ideal. However something else catches my eye in a comment about the same blog post: rklaehn on May 15, 2022 I have found that a very good approach is to apply some very simple transformations such as delta encoding of timestamps, and then letting a good standard compression algorithm such as zstd or deflate take care of the rest. Using a general purpose algorithm is quite intriguing! One thing I’ve noticed is that there are no Swift implementations for simple-8b or Gorilla. This means I would have to wrap an existing implementation (a real hassle) or write a Swift one (risky, I would probably mess it up). General purpose algorithms are much more common and side-step both of those issues. So we’ll look at both. For simplicity I’ll call simple-8b and Gorilla the “specialist algorithms” and everything else “generalist”. Evaluating the Specialist Algorithms Starting with the specialists seems logical. I expect they will perform better which will give us a nice baseline for comparison. But first we need to smooth out a few wrinkles. While wiring up an open-source simple-8b implementation I realize that it requires integers and both our timestamp and weight are floating point numbers. To solve this we’ll truncate to milliseconds and milligrams. A honey bee can flap its wings in 5 ms. A grain of salt is approximately 1mg. Both of these feel way more precise than necessary but better to err on that side anyways. 49.0335097 seconds 17.509999999999998 grams 49033 milliseconds 17509 milligrams We’ll use this level of precision for all our tests except Gorilla, which is designed for floating point numbers. Negative Numbers Negative numbers show up semi-frequently in scale data because often when you pick something up off a scale it will drop below zero. Unfortunately for us simple-8b doesn’t like negative numbers. Why? Let’s take a little detour and look at how computers store numbers. They end up as sequences of 1s and 0s like: 0000000000010110 is 22 0000000001111011 is 123 0000000101011110 is 350 You’ll notice that these tend to have all their 1s all on the right. In fact, only very large numbers will have 1s on the left. simple-8b does something clever where it uses 4 of the leftmost spaces to store some 1s and 0s of its own. This is fine for us. We’re not storing huge numbers so those leftmost spaces will always be 0 in our data. Now let’s look at some negatives. 1111111111101010 is -22 1111111110000101 is -123 1111111010100010 is -350 This is not great, the left half is all 1s! Simple-8b has no way of knowing whether the leftmost 1 is something it put there or something we put there so it will refuse to even try to compress these. One solution for this is something called ZigZag encoding. If you look at the first few positive numbers, normally they’ll look like this: 0000000000000001 is 1 0000000000000010 is 2 0000000000000011 is 3 0000000000000100 is 4 ZigZag encoding interleaves the negative numbers in between so now these same 0/1 sequences take on a new meaning and zig zag between negative and positive: 0000000000000001 is -1 zig 0000000000000010 is 1 zag 0000000000000011 is -2 zig 0000000000000100 is 2 zag If we look at our negative numbers from earlier, we can see that this gets rid of our problematic left-side 1s. # Normal ZigZag -22 1111111111101010 0000000000101011 -123 1111111110000101 0000000011110101 -350 1111111010100010 0000001010111011 We only need this for simple-8b, but it can be used with other integer encodings too. Kinda cool! Technically we could run our tests now, but we’re going to do two more things to eke out a little extra shrinkage. First is delta encoding. The concept is simple: you replace each number in your data set with the difference (delta) from the previous value. Visually these already look smaller. Amusingly enough they actually are smaller. We’ll use this for all algorithms except Gorilla which does delta encoding for us. The second tweak relates to the ordering of our data. So far we’ve been talking about time series as pairs of (timestamp, mass) points. Both specialist algorithms require us to provide a single list of numbers. We have two choices to flatten our pairs: Choice 1: [first_timestamp, first_mass, second_timestamp, second_mass, …] Choice 2: [first_timestamp, second_timestamp, … last_timestamp, first_mass, second_mass, …] Choice 2 compresses better on all algorithms (generalist too) even when we apply it after delta encoding. Again, Gorilla does its own thing–are you seeing the trend? Specialist Results We’ve truncated and pre-encoded, so let’s see some results. Algorithm Ratio 1 Ratio 2 Ratio 3 Avg. Ratio Avg. MB/year simple-8b 6.92 5.4 7.18 6.5 4 gorilla 6.72 4.18 6.88 5.9 4.4 &RightTee; higher is better &LeftTee; lower is better I tested with three different types of scale recordings for a bit of variety, then backed out the MB/year from the average compression ratio. Going from 26 MB/year to under 5 is a great result! Now for the Generalist Ones Similar to the specialist algorithms, we have a few choices to make before we can run our tests on the generalists. For simplicity we’re going to format our data as CSV. This might seem a little odd but it has a few perks: • It’s human-readable which is nice for debugging. • It’s also fairly compact as far as text representations go. • Most languages have native libraries to make reading/writing CSVs easy. ^(alas, Swift does not) We’ll use delta encoding like above–it’d be silly not to. We could really stretch the definition of CSV and stack all of the timestamps on top of all the masses into a single column, but that sacrifices a bit of readability so we won’t. Picking Algorithms There are a lot of general purpose compression algorithms. One popular benchmark lists over 70! We’re going to pick just 5. They are: • zlib, LZMA, and LZFSE – these come built-in with iOS which makes my life easier. zlib and LZMA are also fairly common. • Zstandard (aka zstd) and Brotli – from Facebook and Google respectively, both companies with an interest in good compression Picking Levels We’ve narrowed it down from 70 to 5, but there’s another curveball. Unlike the specialist algorithms which have no configuration options, most generalist algorithms let you choose a level that trades off speed for better compression. You can compress fast or slow down to compress more. For simplicity (and so I don’t have to show you a table with 40+ rows) we are not going to test all 11 Brotli levels or all 20+ zstd levels. Instead we’re going to choose levels that run at about the same speed. Apple makes this easier for us since LZFSE has no level and iOS only has zlib 5 and LZMA 6. All we have to do is pick levels for Brotli and zstd from this chart. [Speed benchmarks for our 5 algorithms] We’ll use Brotli 4 and zstd 5 since those are in-line with the fastest iOS algorithm. This means that zlib and LZMA are slightly advantaged but we’ll keep that in mind. Generalist Results We’ve prepped our CSV and made all our choices, so let’s see some results. Algorithm Ratio 1 Ratio 2 Ratio 3 Avg. Ratio Avg. MB/year zlib 5 8.50 5.79 8.18 7.49 3.47 lzma 6 8.12 5.55 7.49 7.1 3.7 zstd 5 7.49 5.71 7.74 6.98 3.72 brotli 4 7.84 5.52 7.53 6.96 3.74 lzfse 7.49 5.36 7.12 6.7 3.8 &RightTee; higher is better &LeftTee; lower is better Wow! Everything is under 4MB. Coming from 26MB this is fantastic. Specialist v. Generalist I’ve plotted everything side-by-side: [MB/year by algorithm] Weirdly, the generalist algorithms universally beat the specialists. On top of that, you’ll recall we picked generalist levels that were fairly fast. So we can actually widen the gap if we’re willing to compress slower. That feels like cheating, but doing the single column CSV doesn’t. Plus I’m really curious about that, so here it is: [MB/year by algorithm including single column CSV results] Seems like if you’re not a CSV purist you can squeeze an extra 400KB or so. Not bad. What Gives? It really does not make sense to me that the generalist algorithms come out on top. It’s possible I made a mistake somewhere. To check this, I look to see if every compressed time series can be reversed back to the original scale time series. They all can. My second guess is that maybe my time series data is not well-suited for simple-8b and Gorilla. I saw mention that equally spaced timestamps are preferred and my data is anything but: timestamps deltas 1691685057323 n/a To see if this is the problem, I re-run the benchmarks and truncate timestamps to the nearest 0.01s, 0.1s and even 1s. This ensures that there is a finite sized set of delta values (101, 11 and 2 [Compression ratio by timestamp granularity] As expected this does improve the compression ratio of the specialist algorithms. But it also gives a similar boost to the generalist one. So it doesn’t explain the difference. I don’t have a third guess. Maybe it is real? Back to Where We Started This all started since I was anxious about inflating the size of my humble iOS app. Our baseline was adding 26MB of new data each year, which became ~100MB/year in iCloud. With a general purpose compression algorithm it looks like we can get these numbers down to ~4MB and ~16MB per year respectively. Much better. Any of the generalist algorithms would work. In my case using one of Apple’s built-ins is an easy choice: • It’s ~1 line of code to implement them. ^Plus a few lines to make a CSV. • Using Brotli or zstd would increase my app’s download size by 400-700 KB. Not a lot but avoiding it is nice. Try It at Home One thing we didn’t touch on is that the distribution of your data can impact how well the compression works. It’s possible these results won’t translate to your data. To help check that, I’ve put my benchmarking CLI tool and a speed-test macOS/iOS app up on GitHub here. If you can put your data in CSV format, you should be able to drop it in and try out all the algorithms mentioned in this post. If you do, let me know what sort of results you get! I'm curious to see more real-world data points.
{"url":"https://stephenpanaro.com/blog/time-series-compression","timestamp":"2024-11-04T05:49:21Z","content_type":"text/html","content_length":"34846","record_id":"<urn:uuid:eac2b715-4e8c-457b-94f7-e76d99213acf>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00553.warc.gz"}
Motivations for Sampling in Statistical Inference Sampling is very common in the society to model distributions and estimate variables. For a very common example, we randomly sample a large quantity of families to model the family income distribution and estimate the mean of family income. Here, family income is something that can be directly measured therefore estimating the mean of family income from the samples is not very interesting. What’s interesting is that if we want to model the family income distribution and there must be some latent variables or variables that are dependent on the latent variables that can hardly be directly observed from the samples, what the most likely values, i.e., the expected values, for these variables are. This process is sometimes called statistical inference. The result of statistical inference could be further used for estimating some additional variables. However, unfortunately determining the expected values for these variables during statistical inference is difficult if the model is non-trivial. In this blog post, I would like to discuss why determining the expected values for these variables is difficult and how to approximate the expected values for these variables by sampling. Statistical Inference Expected Values Suppose we have a multivariate joint distribution $P(X_1, X_2, \cdots, X_n)$ and the mathematical form of the multivariate joint distribution is known. We will use $X = \{X_1, X_2, \cdots, X_n\}$ in this article for brevity. Sometimes, we would like to compute the expected values of $\mathbb{E}_{P(X)} \big[f(X_1, X_2, \cdots, X_n)\big]$ for some function $f$. For example, we would like to compute the expected value of the mean of $X_1, X_2, \cdots, X_n$, i.e., $\mathbb{E}_{P(X)} \big[ \frac{1}{n} \sum_{i=1}^{n} X_i \big] $. Using the property of expected values, \mathbb{E}_{P(X)} \bigg[ \frac{1}{n} \sum_{i=1}^{n} X_i \bigg] &= \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{P(X)} \big[ X_i \big] \\ &= \frac{1}{n} \sum_{i=1}^{n} \int_{x \in X_i}^{} P( X_i = x ) x dx \\ According to the property of total probability, P(X_i ) &= \int_{X_1} \int_{X_2} \cdots \int_{X_{i-1}} \int_{X_{i+1}} \cdots \int_{X_n} \\ & \quad P(X_1, X_2, \cdots, X_{i-1}, X_i, X_{i+1}, \cdots, X_n) \\ & \quad d X_1 d X_2 \cdots d X_{i-1} d X_{i+1} \cdots d X_n \\ &= \oint_{X - X_i} P(X_1, X_2, \cdots, X_{i-1}, X_i, X_{i+1}, \cdots, X_n = X_n) \prod_{j \neq i} d X_j Usually the form of $P(X_1, X_2, \cdots, X_n)$ is non-trivial, and this integral will immediately become intractable to derive if the number of variables $n$ becomes large. The reader might check such an example in variational inference. Now we are stuck. We could not compute the expected value of the mean of $X_1, X_2, \cdots, X_n$, even if the joint distribution is known, because we cannot derive the probability distribution $P (X_i)$ for each single variable $X_i$. Approximate By Sampling Although we cannot derive the probability distribution $P(X_i)$ and further the expected value of some of the functions of variables that we are interested, if we can get some multivariate samples from the distribution $P(X_1, X_2, \cdots, X_n)$, we can empirically determine the distribution of $P(X_i)$ and its statistics, such as the expected value, from the samples. Going back to the example of computing the expected value of the mean of the variables, if we can get $N$ samples, $x^{(1)}, x^{(2)}, \cdots, x^{(N)}$ where $x^{(i)} = \{x_1^{(i)}, x_2^{(i)}, \cdots, x_n^{(i)}\}$, from $P(X_1, X_2, \cdots, X_n)$, because the law of large numbers, the mean of the variables could be estimated as the mean of the mean of the variables in the samples. Concretely, \mathbb{E}_{P(X)} \bigg[ \frac{1}{n} \sum_{i=1}^{n} X_i \bigg] &= \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{P(X)} \big[ X_i \big] \\ &\approx \frac{1}{n} \sum_{i=1}^{n} \frac{1}{N} \sum_{j=1}^{N} x_i^{(j)} \\ &= \frac{1}{nN} \sum_{i=1}^{n} \sum_{j=1}^{N} x_i^{(j)} \\ More generally, if we are interested in the expected value of a random variable $f(X)$ and the (posterior) distribution $P(X)$, given $N$ samples, $x^{(1)}, x^{(2)}, \cdots, x^{(N)}$, we could estimate expected value of a random variable as \mathbb{E}_{P(X)} \approx \frac{1}{N} \sum_{i=1}^{N} f(x^{(i)}) Sampling Methods Now the most important question becomes how to generate samples from a complex multivariate joint distribution. We could imagine, without special methods, we could not even write a program to do samplings from a “simple” multivariate Gaussian distribution easily. There are sampling methods such as Gibbs sampling and Metropolis-Hastings algorithm to do a sequence of sampling from conditional univariate distributions that approximates the sampling from the multivariate joint distribution. I will write additional articles on these methods in the future. Sampling is critical for statistical inference, especially from a multivariate joint posterior distribution for latent variables. The samples could be used for estimate variables, approximate joint distributions or marginal distributions. How to do sampling from a univariate distribution using an existing uniform sampler? A simple approach is to derive the cumulative distribution function (CDF) $F_X(X = x) = P(X \leq x) = y \in [0, 1]$ for the univariate distribution $P(X)$, if CDF does exist. The inverse CDF is $F_X^ {-1}(Y)$. Then with the uniform sampler $Y \sim U[ 0, 1 ]$, $F_X^{-1}(Y)$ has the same distribution as $P(X)$. F_Y(y) &= P(Y \leq y) \\ &= P(F_X(x) \leq y) \\ &= P(X \leq F_X^{-1}(y)) \\ &= F_X(F_X^{-1}(y)) \\ &= y P(Y = y) &= \frac{dF_Y}{d_Y}(y) \\ &= 1 \\ Therefore, for any distribution $P(X)$, $F_X(X) = Y \sim U[0, 1]$. The inverse function $X = F_X^{-1}(Y)$ maps from variable $Y$ to $X$, so $F_X^{-1}(Y)$ has the same distribution as $X$. Why for some modern deep models, doing sampling is not very difficult during inference? Many models, such as image semantic segmentation models, are maximizing $P(Y_1, Y_2, \cdots, Y_n | X_1, X_2, \cdots, X_m) = \prod_{i=1}^{n} P(Y_i | X_1, X_2, \cdots, X_m)$ during training, and during inference sampling from each of the univariate distribution $P(Y_i | X_1, X_2, \cdots, X_m)$ is relatively simple. Some other models, such as recurrent neural networks for language modeling, are maximizing $P(X_1, X_2, \cdots, X_n) = P(X_1) P(X_2 | X_1) \cdots P(X_n | X_1, X_2, \cdots, X_{n-1})$ during training, and during inference sampling from each of the univariate distribution $P(X_i | X_1, X_2, \cdots, X_{i-1})$ is relatively simple.
{"url":"https://leimao.github.io/blog/Statistical-Sampling-Motivations/","timestamp":"2024-11-04T17:28:47Z","content_type":"text/html","content_length":"33719","record_id":"<urn:uuid:799b18a6-8fd0-4426-9341-ae7c3e57a982>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00794.warc.gz"}
Basics of Electronics Electronics is the branch of physics that deals with the electric effects on electronic components. In this chapter we are going to concentrate on modern digital electronics. Here are some basic definitions that you should know. • Current: It is flow of electrons, measured in Amperes. • AC (Alternating Current): An electric current that flows in one direction steadily. • DC (Direct Current): An electric current that flows in both direction in cyclic manner. • Frequency: The number of signal cycles completed in one second is called as frequency, measured in Hertz. • Voltage: The electric force that produces a flow of electricity in a circuit, expressed in Volts. • Resistance: It is the opposition to the flow of current, measured in ohms O. • Capacitance: It is capacity to store an electric charge, measured in Farads. • Inductance: An electrical phenomenon whereby an Electro Motive Force (EMF) is generated in a closed circuit by a change in the flow of current. • Decibel: A logarithmic unit of sound intensity. Following are the points which are covered in this chapter. • Components • Measuring instruments • Number system • Binary number system • Decimal number system • Octal number system • Hexadecimal number system • Codes • Logic gates • Flip-flops • Multiplexer and Demultiplexer • Adders Tip Box Did you know, why the middle pin in three pin socket is thicker. As leakage current gives shocks to an individual, it needs to flow through the earthing wire, and resisitance should be as low compared to that of the phase and neutral wires. Hence larger cross sectional area reduces the resistance and provides a easy path for the current to flow. So the middle pin is always thicker than the other two pins in a three pin socket. CodesandTutorials © 2014 All Rights Reserved
{"url":"https://codesandtutorials.com/hardware/electronics/","timestamp":"2024-11-05T10:33:12Z","content_type":"text/html","content_length":"44577","record_id":"<urn:uuid:29b1b1c2-2531-48c6-95dc-40b3cfcbbe8d>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00063.warc.gz"}
4.1: Basic Notions of Set Theory Last updated Page ID \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) ( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\) \( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\) \( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\) \( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vectorC}[1]{\textbf{#1}} \) \( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \) \( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \) \( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \) \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\ evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\ newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y} \) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real} {\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec} [3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array} {r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp #2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\ wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\ newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var} {\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\ bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\ widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\) In modern mathematics, there is an area called Category theory^1 which studies the relationships between different areas of mathematics. More precisely, the founders of category theory noticed that essentially the same theorems and proofs could be found in many different mathematical fields – with only the names of the structures involved changed. In this sort of situation, one can make what is known as a categorical argument in which one proves the desired result in the abstract, without reference to the details of any particular field. In effect this allows one to prove many theorems at once – all you need to convert an abstract categorical proof into a concrete one relevant to a particular area is a sort of key or lexicon to provide the correct names for things. Now, category theory probably shouldn’t really be studied until you have a background that includes enough different fields that you can make sense of their categorical correspondences. Also, there are a good many mathematicians who deride category theory as “abstract nonsense.” But, as someone interested in developing a facility with proofs, you should be on the lookout for categorical correspondences. If you ever hear yourself utter something like “well, the proof of that goes just like the proof of the (insert weird technical-sounding name here) theorem” you are probably noticing a categorical Okay, so category theory won’t be of much use to you until much later in your mathematical career (if at all), and one could argue that it doesn’t really save that much effort. Why not just do two or three different proofs instead of learning a whole new field so we can combine them into one? Nevertheless, category theory is being mentioned here at the beginning of the chapter on sets. Why? We are about to see our first example of a categorical correspondence. Logic and Set theory are different aspects of the same thing. To describe a set people often quote Kurt Gödel – “A set is a Many that allows itself to be thought of as a One.” (Note how the attempt at defining what is really an elemental, undefinable concept ends up sounding rather mystical.) A more practical approach is to think of a set as the collection of things that make some open sentence true.^2 [11] Recall that in Logic the atomic concepts were “true”, “false”, “sentence” and “statement.” In Set theory, they are “set”, “element” and “membership.” These concepts (more or less) correspond to one another. In most books, a set is denoted either using the letter \(M\) (which stands for the German word “menge”) or early alphabet capital roman letters – \(A\), \(B\), \(C\), et cetera. Here, we will often emphasize the connection between sets and open sentences in Logic by using a subscript notation. The set that corresponds to the open sentence \(P(x)\) will be denoted \(S_P\), we call \ (S_P\) the truth set of \(P(x)\). \(S_P = \{x | P(x)\}\) On the other hand, when we have a set given in the absence of any open sentence, we’ll be happy to use the early alphabet, capital roman letters convention – or frankly, any other letters we feel like! Whenever we have a set \(A\) given, it is easy to state a logical open sentence that would correspond to it. The membership question: \(M_A(x) =\) “Is \(x\) in the set \(A\)?” Or, more succinctly, \(M_A(x) = “x ∈ A”\). Thus the atomic concept “true” from Logic corresponds to the answer “yes” to the membership question in Set theory (and of course “false” corresponds to “no”). There are many interesting foundational issues which we are going to sidestep in our current development of Set theory. For instance, recall that in Logic we always worked inside some “universe of discourse.” As a consequence of the approach we are taking now, all of our set theoretic work will be done within some unknown “universal” set. Attempts at specifying (a priori) a universal set for doing mathematics within are doomed to failure. In the early days of the twentieth century they attempted to at least get Set theory itself on a firm footing by defining the universal set to be “the set of all sets” – an innocuous sounding idea that had funny consequences (we’ll investigate this in Section 4.5). In Logic we had “sentences” and “statements,” the latter were distinguished as having definite truth values. The corresponding thing in Set theory is that sets have the property that we can always tell whether a given object is or is not in them. If it ever becomes necessary to talk about “sets” where we’re not really sure what’s in them we’ll use the term collection. You should think of a set as being an unordered collection of things, thus \(\{\text{popover }, 1, \text{ froggy}\}\) and \(\{1, \text{ froggy}, \text{ popover}\}\) are two ways to represent the same set. Also, a set either contains, or doesn’t contain, a given element. It doesn’t make sense to have an element in a set multiple times. By convention, if an element is listed more than once when a set is listed we ignore the repetitions. So, the sets \(\{1, 1\}\) and \(\{1\}\) are really the same thing. If the notion of a set containing multiple instances of its elements is needed there is a concept known as a multiset that is studied in Combinatorics. In a multiset, each element is preceded by a so-called repetition number which may be the special symbol \(∞\) (indicating an unlimited number of repetitions). The multiset concept is useful when studying puzzles like “How many ways can the letters of MISSISSIPPI be rearranged?” because the letters in MISSISSIPPI can be expressed as the multiset \(\{1 · M, 4 · I, 2 · P, 4 · S\}\). With the exception of the following exercise, in the remainder of this chapter we will only be concerned with sets, never multisets (Not for the timid!) How many ways can the letters of MISSISSIPPI be arranged? If a computer scientist were seeking a data structure to implement the notion of “set,” he’d want a sorted list where repetitions of an entry were somehow disallowed. We’ve already noted that a set should be thought of as an unordered collection, and yet it’s been asserted that a sorted list would be the right vehicle for representing a set on a computer. Why? One reason is that we’d like to be able to tell (quickly) whether two sets are the same or not. If the elements have been presorted it’s easier. Consider the difficulty in deciding whether the following two sets are equal. \(S_1 = \{♠, 1, e, π, ♦, A, Ω, h, ⊕, \epsilon\}\) \(S_2 = \{A, 1, \epsilon, π, e, s, ⊕, ♠, Ω, ♦\}\) If instead we compare them after they’ve been sorted, the job is much easier. \(S_1 = \{1, A, ♦, e, \epsilon, h, Ω, ⊕, π, ♠\}\) \(S_2 = \{1, A, ♦, e, \epsilon, Ω, ⊕, π, s, ♠\}\) This business about ordered versus unordered comes up fairly often so it’s worth investing a few moments to figure out how it works. If a collection of things that is inherently unordered is handed to us we generally put them in an order that is pleasing to us. Consider receiving five cards from the dealer in a card game, or extracting seven letters from the bag in a game of Scrabble. If, on the other hand, we receive a collection where order is important we certainly may not rearrange them. Imagine someone receiving the telephone number of an attractive other but writing it down with the digits sorted in increasing order! Consider a universe consisting of just the first \(5\) natural numbers \(U = \{1, 2, 3, 4, 5\}\). How many different sets having \(4\) elements are there in this universe? How many different ordered collections of \(4\) elements are there? The last exercise suggests an interesting question. If you have a universal set of some fixed (finite) size, how many different sets are there? Obviously, you can’t have any more elements in a set than are in your universe. What’s the smallest possible size for a set? Many people would answer \(1\) – which isn’t unreasonable! – after all a set is supposed to be a collection of things, and is it really possible to have a collection with nothing in it? The standard answer is \(0\) however, mostly because it makes a certain counting formula work out nicely. A set with one element is known as a singleton set (note the use of the indefinite article). A set with no elements is known as the empty set (note the definite article). There are as many singletons as there are elements in your universe. They aren’t the same though, for example \(1 \neq \{1\}\). There is only one empty set and it is denoted \(∅\) – irrespective of the universe we are working in. Let’s have a look at a small example. Suppose we have a universal set with \(3\) elements, without loss of generality, \(\{1, 2, 3\}\). It’s possible to construct a set, whose elements are all the possible sets in this universe. This set is known as the power set of the universal set. Indeed, we can construct the power set of any set \(A\) and we denote it with the symbol \(\mathcal{P}(A)\). Returning to our example we have \(\mathcal{P}(\{1, 2, 3\}) = \big\{ ∅, \{1\}, \{2\}, \{3\}, \{1, 2\}, \{1, 3\}, \{2, 3\}, \{1, 2, 3\} \big\}.\) Find the power sets \(\mathcal{P}(\{1, 2\})\) and \(P(\{1, 2, 3, 4\})\). Conjecture a formula for the number of elements (these are, of course, sets) in \(\mathcal{P}(\{1, 2, . . . n\})\). Hint: If your conjectured formula is correct you should see why these sets are named as they are. One last thing before we end this section. The size (a.k.a. cardinality) of a set is just the number of elements in it. We use the very same symbol for cardinality as we do for the absolute value of a numerical entity. There should really never be any confusion. If \(A\) is a set then \(|A|\) means that we should count how many things are in \(A\). If \(A\) isn’t a set then we are talking about the ordinary absolute value What is the power set of \(∅\)? If you got the last exercise in the chapter you’d know that this power set has \(2^0 = 1\) element. Try iterating the power set operator. What is \(\mathcal{P}(\mathcal{P}(∅))\)? What is \(\mathcal{P}(\mathcal{P}(\mathcal{P}(∅)))\)? Determine the following cardinalities. 1. \(A = \{1, 2, \{3, 4, 5\}\} |A| = \underline{\;\;\;\;\;\;\;\;\;\;\;} \) 2. \(B = \{\{1, 2, 3, 4, 5\}\} |B| = \underline{\;\;\;\;\;\;\;\;\;\;\;} \) What, in Logic, corresponds the notion \(∅\) in Set theory? What, in Set theory, corresponds to the notion \(t\) (a tautology) in Logic? What is the truth set of the proposition \(P(x) =\) “\(3\) divides \(x\) and \(2\) divides \(x\)”? Find a logical open sentence such that \(\{0, 1, 4, 9, . . .\}\) is its truth set. How many singleton sets are there in the power set of \(\{a, b, c, d, e\}\)? “Doubleton” sets? How many \(8\) element subsets are there in \(P(\{a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p\})?\) How many singleton sets are there in the power set of \(\{1, 2, 3, . . . n\}\)?
{"url":"https://math.libretexts.org/Bookshelves/Mathematical_Logic_and_Proof/Gentle_Introduction_to_the_Art_of_Mathematics_(Fields)/04%3A_Sets/4.01%3A_Basic_Notions_of_Set_Theory","timestamp":"2024-11-09T17:27:07Z","content_type":"text/html","content_length":"144686","record_id":"<urn:uuid:71372a1a-cf8c-4ef6-a334-259f0e18c011>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00046.warc.gz"}
Racial Segregation in the Bay Area Jonathan Osler In The Classroom Questions and content to explore with students might include: • What is "median household income"? Why is the median used instead of a different type of averaging? • What does "inter-municipal divergence" mean? How did the authors calculate those values? Why are they all below 1? Why sort these values from largest to smallest? • Can a city that is highly diverse also be segregated? How? Why? • Compare the median HHI with different racial compositions by city. How strong do these racial groups correlated with the HHI? • What is the relationship, if any, between the percent of a city that is white (or another race) to it's ranking in this list? • Choose some of this data to represent in graph form. What are the key takeaways from your graph? How Would You Use This News? How would you use this graph in your classroom? What math content could it help students explore? Share your ideas in the comments. 4 Comments Dec 01, 2022 This is an interesting and important math problem to discuss in a classroom, but I ran into a couple obstacles that would make teaching this difficult: firstly that HHI is never defined, and the expectation is that every reader will instinctively know what that is; this creates a language barrier for anyone who struggles with language and literacy. Secondly, you ask what the definition of Inter Municipal Divergence is, and how it is calculated, but after reading the linked article/study I couldn't find that calculation anywhere. Perhaps the answers seem straightforward to you, but they are not to everyone. George Delgado Dec 01, 2022 Very interesting data but confusing as to how you got calculations. I would love to share this with high school students but there doesn't seem to be any explanation for how you calculated "inter-municipal divergence". Steve McClure Dec 01, 2022 This is fascinating that there is such a great wealth gap between locations within the Bay Area, and how diversity is and is not an affect of the gap Austin McKinzie Dec 01, 2022 Hey RadicalMath, great graph and topic and guiding questions! My classmate and I had a great time diving in and off of the titled topic. We were really interested in what correlations you found between race majority and medium income? We also wondered how employment rates between different racial groups my reflect or not the median income. We continued in the discussion that historically cities are both diverse and segregated as populations gather where there is similarity in culture, language, interests, etc. and that segregation isn't necessarily negative unless it takes away from another's opportunity and livelihood. We were a little confused on the use of divergence in the changing heat map, but enjoyed the conversation that stemmed from this…
{"url":"https://www.radicalmath.org/post/racial-segregation-in-the-bay-area","timestamp":"2024-11-12T00:01:23Z","content_type":"text/html","content_length":"1050039","record_id":"<urn:uuid:9e43f2c2-086d-497a-bacb-0433fa387e33>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00753.warc.gz"}
If y=sinxcosx then find out dxdy :-... | Filo Question asked by Filo student Not the question you're searching for? + Ask your question Video solutions (1) Learn from their 1-to-1 discussion with Filo tutors. 2 mins Uploaded on: 1/18/2024 Was this solution helpful? Found 2 tutors discussing this question Discuss this question LIVE for FREE 14 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Practice more questions on Trigonometry View more Students who ask this question also asked View more Stuck on the question or explanation? Connect with our Mathematics tutors online and get step by step solution of this question. 231 students are taking LIVE classes Question Text If then find out :- Updated On Jan 18, 2024 Topic Trigonometry Subject Mathematics Class Class 11 Answer Type Video solution: 1 Upvotes 115 Avg. Video Duration 2 min
{"url":"https://askfilo.com/user-question-answers-mathematics/if-then-find-out-36373235383735","timestamp":"2024-11-07T19:39:19Z","content_type":"text/html","content_length":"311787","record_id":"<urn:uuid:17b54be8-4d42-4cab-ad16-9eee4a5f6c03>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00215.warc.gz"}
Spatiotemporal functional permutation tests for comparing observed climate behavior to climate model projections Articles | Volume 10, issue 2 © Author(s) 2024. This work is distributed under the Creative Commons Attribution 4.0 License. Spatiotemporal functional permutation tests for comparing observed climate behavior to climate model projections Comparisons of observed and modeled climate behavior often focus on central tendencies, which overlook other important distributional characteristics related to quantiles and variability. We propose two permutation procedures, standard and stratified, for assessing the accuracy of climate models. Both procedures eliminate the need to model cross-correlations in the data, encouraging their application in a variety of contexts. By making only slightly stronger assumptions, the stratified procedure dramatically strengthens the ability to detect a difference in the distribution of observed and climate model data. The proposed procedures allow researchers to identify potential model deficiencies over space and time for a variety of distributional characteristics, providing a more comprehensive assessment of climate model accuracy, which will hopefully lead to further model refinements. The proposed statistical methodology is applied to temperature data generated by the state-of-the-art North American Coordinated Regional Climate Downscaling Experiment (NA-CORDEX). Received: 15 Nov 2023 – Revised: 13 Jul 2024 – Accepted: 28 Jul 2024 – Published: 02 Oct 2024 This paper is concerned with the comparison of predictions of commonly used climate models to actual (reanalysis) climate data. Phillips (1956) is credited with introducing the first successful climate model. In the subsequent 60+ years, climate models have grown increasingly more complex and popular for describing climate behavior, particularly for exploring the potential impact of humans on future climate. Projections of future climate are typically based on general circulation models (GCMs), which are sometimes referred to as global climate models. One way to assess the reliability of a climate model is to examine whether the model is able to reproduce climate behavior observed in the past (Raäisaänen, 2007; Randall et al., 2007; IPCC, 2014; Garrett et al., 2023). The deficiencies a model has in describing observed climate are likely to be amplified in the future and may weaken their usefulness in making decisions based on the available data. Many comparisons have been made between climate model projections of current climate and historical records. Lee et al. (2019) compare mean near-surface air temperature and precipitation decadal trends from climate models to historical trends over the continental USA. Jia et al. (2019) compute statistics over the Tibetan Plateau (TP) for observational data and various climate models, including the sample mean, standard deviation, root-mean-square error, and time- and space-based correlation coefficients, to assess the accuracy of climate models in describing the behavior of observed data. Kamworapan and Surussavadee (2019) compare 40 GCMs to various observational and reanalysis data sets using 19 performance metrics, including mean annual temperature and precipitation, mean seasonal cycle amplitude of temperature and precipitation, the correlation coefficient between simulated and observed mean temperatures and precipitation, and variance of annual average temperature and precipitation over a 99-year period. Oh et al. (2023) compare the root-mean-square difference (RMSD) and Taylor skill score statistics of 17 climate models with observational data for several variables and two ocean areas. These approaches tend to focus on average behavior over certain time periods and partitions of the study area. They may not be adequate in describing more detailed aspects of behavior over time and space. Methods for evaluating climate models from a functional data perspective are less common. Vissio et al. (2020) proposed using a Wasserstein distance to measure the gap between a climate model and a reference distribution based on raw observations or reanalysis data. Their goal was to rank the “nearness” of various climate model outputs to a reference data set. Garrett et al. (2023) extended this idea to correct for “misalignment” of the time periods of the data sets being compared. We cast the problem of evaluating the agreement of a model with historical records into a statistical testing problem within the framework of functional data analysis. In particular, since we consider whole curves rather than averages, our approach allows us to detect shifts in the seasonal cycle in the GCMs that do not match the observed data. In contrast to the other functional approaches, which seem to focus on ranking the similarity of individual climate model outputs to a reference data set, our goal is to assess whether a reference data set can be viewed as a realization from a hypothetical climate distribution that produced the collection of climate model outputs. In what follows, we (i) describe a novel testing procedure for spatiotemporal functional data and (ii) provide a complementary case study that expands the types of climate characteristics considered in order to provide a more comprehensive evaluation of how a climate model output compares to observed climate data (or, in our case, a close proxy). We will compare daily temperature data for the fifth generation of the European Centre for Medium-Range Weather Forecasts (Copernicus Climate Change Service (C3S), 2017) to a climate model output provided by the North American Coordinated Regional Climate Downscaling Experiment (NA-CORDEX; Mearns et al., 2017). For convenience, we will refer to these data sets as the ERA5 data and NA-CORDEX data, respectively. Our goal is to assess how well NA-CORDEX climate model projections capture the behavior of the observed climate, as characterized by the ERA5 reanalysis data. In Sect. 2, we describe these data sets in more detail as they directly motivate our methodology. In Sect. 3, we describe the statistical approach for comparing the climate models. In Sect. 4, we perform a simulation study that highlights the benefits of the proposed method over the standard procedure. In Sect. 5, we describe the results of our climate model comparisons. Lastly, we summarize our investigation in Sect. 6. 2Description of the ERA5 and NA-CORDEX data 2.1General information about climate reanalysis data A climate reanalysis feeds large amounts of observational data into data assimilation models to provide a numerical summary of recent climate across much or all of the Earth at regular spatial resolutions and time steps (Dee et al., 2016; European Centre for Medium-Range Weather Forecasts, 2023b). Typically, all available observational data are fed into the data assimilation algorithm at regular hourly intervals (e.g., every 6–12h) to estimate the state of the climate at each time step (Dee et al., 2016). The resulting data product typically provides information on numerous climate variables such as surface air temperature, total precipitation, and wind speed. A climate reanalysis data product is much more manageable from a research standpoint, since the product is uniform and since the researcher does not need to access the many observational data sets or immense computational resources needed to produce the data (Dee et al., 2016). However, Dee et al. (2016) point out that because climate reanalysis data use many different types of data from different sources, locations, and times, this can result in uncertainty in the estimated climate at each time step and lead to phantom data patterns. 2.2ERA5 background information The ERA5 global reanalysis is the fifth-generation reanalysis produced by the European Centre for Medium-Range Weather Forecasts (Hersbach et al., 2017, 2020a) and is made freely available through the Copernicus Climate Change Service Climate Data Store (Copernicus Climate Change Service (C3S), 2017). The reanalysis data are available from January 1940 to approximately the present day. The data assimilate 74 data sources using a 4D‐Var (four-dimensional variational) ensemble data assimilation system (European Centre for Medium-Range Weather Forecasts, 2023a; Hersbach et al., 2020a). The program produces many atmospheric, land, and oceanic climate variables. Our analysis focuses on the monthly average daily maximum near-surface air temperature from the ERA5 hourly data on single levels from 1940 to the present (Hersbach et al., 2020b). Near-surface temperature is the 2m temperature, which is effectively the temperature that humans experience. 2.3NA-CORDEX background information NA-CORDEX is focused on downscaling the climate model output in the North American domain using boundary conditions from the CMIP5 archive (Hurrell et al., 2011). It is part of the broader CORDEX organized by the World Climate Research Programme, which aims to organize regional climate downscaling through partnerships with research groups across the globe (CORDEX, 2020). Figure 1a displays the NA-CORDEX domain with the associated geopolitical boundaries. A regional climate model (RCM) receives boundary conditions from a GCM and predicts (downscales) the resulting climate behavior at a finer spatial scale than the associated GCM, which allows researchers to investigate climate behavior at smaller spatial resolutions (e.g., at the city level instead of county level). Climate model outputs typically exhibit bias compared to observations. This bias can be corrected using various algorithms by adjusting the data using a reference data set. The NA-CORDEX program used the multivariate bias correction (MBCn) algorithm (Cannon et al., 2015; Cannon, 2016, 2018) to perform this correction. MBCn uses quantile mapping to adjust the statistical distribution of the model output to match the distribution of a reference data set. MBCn makes this adjustment jointly for multiple variables and not only for individual marginal distributions, which is important because climate models produce many variables simultaneously. The NA-CORDEX program bias-corrected the original climate model output using two observational data sets, i.e., Daymet and gridMET. Additional details are discussed by McGinnis and Mearns (2021). Figure 1 displays heat maps of the average temperature (°C) of the raw NA-CORDEX data, as well as the bias added to the raw data for both reference data sets. The Daymet data set includes substantially more spatial locations. All subsequent analyses will be performed on bias-corrected data over the corresponding domain. The NA-CORDEX utilizes combinations of six different GCMs to provide the boundary conditions for seven different RCMs under two sets of future conditions, though not all combinations are currently available. Our analysis will focus on the monthly average of daily maximum near-surface air temperature from the available combinations. 2.4Details on comparing the ERA5 and NA-CORDEX data The ERA5 data are available at locations across the globe, while the NA-CORDEX data are available in the areas surrounding North America. Consequently, we restrict our use of the ERA5 data to the same subdomain as the NA-CORDEX data. Furthermore, we restrict our analysis to locations over the primary land masses around North America (i.e., not small islands or the sea), as response behavior can change dramatically between land and sea and the spatial resolution may not allow for adequate representation of small land masses. Both ERA5 and NA-CORDEX data sets are available at a common spatial resolution known as 44i. The longitude and latitude locations of 44i data are available in 0.5° increments starting from ±0.25°. The “44” in 44i refers to the fact that locations separated 0.5° in longitude along the Equator are approximately 44mi (miles) (about 70.8km) apart. Additionally, since we have previously noted that variables such as precipitation should be used with extreme caution in the context of reanalysis data, we only consider temperature-related data since they are provided by both the ERA5 and NA-CORDEX programs. Lastly, our goal is to compare the observed climate to climate model projections. The historical period for the NA-CORDEX data runs from 1950–2005, while the reanalysis data we consider runs from 1940 to the present day. However, there are known issues with temperature in December 2005 for several NA-CORDEX models (NA-CORDEX, 2020), so we restrict our analysis to monthly temperature for the complete years 1950–2004. For the available data with the above characteristics, there is a single realization of the ERA5 data and 15 realizations of NA-CORDEX data (or, more specifically, 15 combinations of RCM–GCM models with available data). Six RCMs were used to produce the NA-CORDEX data: Additional details about the RCMs are provided in Table S1 in the Supplement. These RCMs were combined with eight versions of GCMs: Although there are 48 total RCM–GCM combinations possible, data were created for only 15 combinations. We summarize the 15 RCM–GCM combinations used to produce the data used in this analysis in Table 1. 3.1Testing context Our goal is to assess how well the NA-CORDEX climate model projections capture the behavior of observed climate using the ERA5 reanalysis data as a proxy. If the climate model projections provide an accurate representation of the observed data, then one can view the observed data as a realization from the same climate distribution producing the NA-CORDEX climate model projections. This will be formalized as the null hypothesis, keeping in mind the usual caveats of the Neyman–Pearson paradigm. The alternative hypothesis will be that the observed data follow a different distribution than the model data. In what follows, we formally describe the problem using appropriate mathematical notation and propose a statistical methodology for making an inference. The data we consider are viewed as realizations of annual spatiotemporal random fields, i.e., $\mathit{\left\{}{X}_{n}\left(\mathbit{s},t\right),\mathbit{s}\in D,t\in T\mathit{\right\}}$, where n denotes year, s spatial location, and t time within the year. The spatial domain D⊂ℝ^2 is assumed to be a known, bounded region. Both the spatial domain and time domain can be continuous, but the data are observed on discrete grids, both in space and time, which will be defined in the following. The year domain $\mathcal{N}\subset {\mathbb{Z}}^{+}$ is assumed to be a known, fixed set of positive integer values. We use the shorthand X[n] to denote the spatiotemporal random field in year n, and we use F[n] to denote its distribution function. Similarly, we use $X=\mathit{\left\{}{X}_ {n},n\in \mathcal{N}\mathit{\right\}}$ to denote the set of the annual random fields for all years in 𝒩, and we use F to denote the corresponding distribution function. Consider first a fixed year n. The range of the function F[n] is [0,1], and its domain is the set of real-valued functions on D×T, which is denoted ℱ(D×T). Thus, ${F}_{n}:\mathcal{F}\left(D×T\right)\to \left[\mathrm{0},\mathrm{1}\ right]$. For $f\in \mathcal{F}\left(D×T\right)$, $\begin{array}{rl}& P\left({X}_{n}\left(\mathbit{s},t\right)\le f\left(\mathbit{s},t\right),\mathbit{s}\in D,t\in T\right)\\ & \phantom{\rule{0.25em}{0ex}}={F}_{n}\left(f\left(\mathbit{s},t\right),\ mathbit{s}\in D,t\in T\right).\end{array}$ For mathematical consistency, ℱ(D×T) must be a subset of a suitable Hilbert space of measurable functions (Horváth and Kokoszka, 2012, Chap. 2). In our context, we consider two functions for F[n], both unknown. The first one, denoted ${F}_{n}^{\mathrm{R}}$, is the distribution function corresponding to real climate, which is represented by the reanalysis data. Thus, the superscript R can be associated with both “real” and “reanalysis”. We observe only one realization from the distribution ${F}_{n}^{\mathrm{R}}$. The second distribution function, denoted ${F}_{n}^{\mathrm{M}}$, describes data generated by the climate model for year n. We generally have a large number of realizations from this distribution. We say that the model describes real data in year n satisfactorily if we cannot reject the hypothesis ${F}_{n}^{\mathrm{R}}={F}_{n}^{\mathrm{M}}$. To evaluate the model over all available years, we work with distribution functions F defined by $\begin{array}{}\text{(1)}& \begin{array}{rl}& P\left({X}_{n}\left(\mathbit{s},t\right)\le f\left(\mathbit{s},t,n\right),\phantom{\rule{0.33em}{0ex}}\mathbit{s}\in D,t\in T,n\in \mathcal{N}\right)\\ & \phantom{\rule{0.25em}{0ex}}=F\left(f\left(\mathbit{s},t,n\right),\phantom{\rule{0.33em}{0ex}}\mathbit{s}\in D,t\in T,n\in \mathcal{N}\right),\end{array}\end{array}$ where f is a real function over $D×T×\mathcal{N}$. We would ideally consider testing $\begin{array}{}\text{(2)}& {H}_{\mathrm{0}}:{F}^{\mathrm{R}}={F}^{\mathrm{M}}\phantom{\rule{1em}{0ex}}\text{versus}\phantom{\rule{1em}{0ex}}{H}_{a}:{F}^{\mathrm{R}}e {F}^{\mathrm{M}}.\end{array}$ Effectively, this is assessing whether X^R could plausibly be viewed as a realization from F^M. Before describing our approach, we formulate the assumptions we will use. These assumptions must, on the one hand, be realistic but, on the other hand, lead to feasible tests. We use the notation M[j ] to refer to the jth GCM–RCM combination from the NA-CORDEX program, with $j=\mathrm{1},\mathrm{2},\mathrm{\dots },{N}_{M}$, and we use ${X}^{{M}_{j}}$ to denote the spatiotemporal field associated with climate model M[j]. In our analysis, N[M]=15. We assume that ${X}^{{\mathrm{M}}_{j}}\stackrel{\text{i.i.d.}}{\sim }{F}^{\mathrm{M}}$, i.e., that the climate model realizations are independent and identically distributed (i.i.d.). The above assumption is realistic when the model runs use different initial parameters for each run or when model runs come from a different combination of models. The initial parameters are the initial conditions from which an RCM begins its model run. Since the RCMs are programmed differently, their outputs are independent of each other. If different RCMs use the same initial conditions, then the model outputs may present some autocorrelation since they are being forced by the same process. Thus, GCM–RCM combinations that share the same GCM may not satisfy the assumption of independent model runs. We will address this concern in the Supplement. Additionally, ${X}_{n}^{{\mathrm{M}}_{j}}\left(\mathbit{s},t\right)$ denotes the random variable observed at location s and time t in year n for model combination M[j]. To reflect real data like temperature or precipitation (or statistics derived from them), we cannot assume that the random field $\mathit{\left\{}{X}_{n}\left(\mathbit{s},t\right),s\in D,t\in T\mathit{\right\}}$ is stationary or Gaussian. We thus do not assume that the X[n] fields have the same distribution, so various trends or changes in n are allowed. Independence of errors is commonly assumed in statistical models, and we also assume independence of annual error functions around potential decadal trends or changes. We emphasize that we do not assume spatial or temporal independence within the fields; that is, we do not assume that X[n](s[1],t[1]) is independent of X[n](s[2],t[2]) if s[1]≠s[2] or t[1]≠t[2]. The annual fields are viewed as functional objects with a complex spatiotemporal dependence In the context of the ERA5 and NA-CORDEX data, it is reasonable to assume that realizations of X^R and ${X}^{{M}_{j}}$ are observed at identical spatial locations s, time points t, and years n, so we do so in what follows. Let $\mathcal{S}=\mathit{\left\{}{\mathbit{s}}_{\mathrm{1}},\mathrm{\dots },{\mathbit{s}}_{{N}_{\mathbit{s}}}\mathit{\right\}}\subset D$ be the set of observed spatial locations, $\mathcal{T}=\mathit{\left\{}{t}_{\mathrm{1}},{t}_{\mathrm{2}},\mathrm{\dots },{t}_{{N}_{t}}\mathit{\right\}}\subset T$ denote the set of observed time points, and $\mathcal{N}=\mathit{\ left\{}{n}_{\mathrm{1}},{n}_{\mathrm{2}},\mathrm{\dots },{n}_{{N}_{n}}\mathit{\right\}}$ denote the set of observed years. 3.2Tests of equality of distributions We first consider a test assessing whether the distribution of the reanalysis data matches that of the model data. Formally, we consider the testing problem stated in Eq. (2). We construct the test statistic using the distance between the real and model data values. For a fixed location s and model M[j], set $\begin{array}{rl}& {D}_{R,{M}_{j}}\left(\mathbit{s}\right)=\frac{\mathrm{1}}{{N}_{n}}\frac{\mathrm{1}}{{N}_{t}}\sum _{n=\mathrm{1}}^{{N}_{n}}\sum _{t=\mathrm{1}}^{{N}_{t}}\left|{X}_{n}^{\mathrm{R}}\ left(\mathbit{s},t\right)-{X}_{n}^{{M}_{j}}\left(\mathbit{s},t\right)\right|,\\ & j=\mathrm{1},\mathrm{2},\mathrm{\dots },{N}_{M}.\end{array}$ The test statistic at location s is $\begin{array}{}\text{(3)}& \stackrel{\mathrm{^}}{T}\left(\mathbit{s}\right)=\frac{\mathrm{1}}{\sqrt{{N}_{M}}}\sum _{j=\mathrm{1}}^{{N}_{M}}{D}_{R,{M}_{j}}\left(\mathbit{s}\right).\end{array}$ We can avoid the problem of multiple testing by considering the distance over the whole space; that is, $\begin{array}{ll}{D}_{R,{M}_{j}}& =\frac{\mathrm{1}}{{N}_{s}}\frac{\mathrm{1}}{{N}_{n}}\frac{\mathrm{1}}{{N}_{t}}\sum _{\mathbit{s}\in \mathcal{S}}\sum _{n=\mathrm{1}}^{{N}_{n}}\sum _{t=\mathrm{1}}^ {{N}_{t}}\left|{X}_{n}^{\mathrm{R}}\left(\mathbit{s},t\right)-{X}_{n}^{{M}_{j}}\left(\mathbit{s},t\right)\right|,\\ & =\frac{\mathrm{1}}{{N}_{\mathbf{s}}}\sum _{\mathbit{s}\in \mathcal{S}}{D}_{R,{M}_ {j}}\left(\mathbf{s}\right),\phantom{\rule{0.25em}{0ex}}j=\mathrm{1},\mathrm{2},\mathrm{\dots },{N}_{M}.\end{array}$ The global test statistic is then $\begin{array}{}\text{(4)}& \stackrel{\mathrm{^}}{T}=\frac{\mathrm{1}}{\sqrt{{N}_{M}}}\sum _{j=\mathrm{1}}^{{N}_{M}}{D}_{R,{M}_{j}}.\end{array}$ We explain the approximation of the null distribution in Sect. 3.4.1 and 3.4.2. While the statistic in Eq. (4) solves the problem of multiple testing, the information that can be drawn from the test based on it is limited; if the null hypothesis is rejected, the test does not indicate over which spatial regions the differences occur and which characteristics contribute to them. These issues are addressed in the following sections. 3.3Distributional characteristics As noted above, testing the equality of distributions is useful, but such tests do not indicate how the distributions differ if the null hypothesis is rejected. We therefore also propose tests to assess whether certain characteristics of F^R (e.g., related to center, dispersion, skewness, and extremes) are consistent with the same characteristics of F^M. In this section, we define the characteristics we consider as population parameters and introduce their estimators. Recall that ${X}_{n}^{\mathrm{M}}\left(\mathbit{s},t\right)$ is the value of a scalar random field indexed by n,s, and t. For a fixed s, we have a scalar random field indexed by n and t. This random field has an expected value (a real number) which we denote by μ^M(s). We estimate it by ${\stackrel{\mathrm{^}}{\mathit{\mu }}}^{\mathrm{M}}\left(\mathbit{s}\right)=\frac{\mathrm{1}}{{N}_{M}{N}_{n}{N}_{t}}\sum _{j=\mathrm{1}}^{{N}_{M}}\sum _{n\in \mathcal{N}}\sum _{t\in \mathcal{T}}{X}_ The expected value μ^M(s) ignores any possible changes in the mean between years or within a year but varies with spatial location. Naturally, we could consider other central tendency characteristics such as the median; dispersion characteristics such as the standard deviation, interquartile range, or total range; or more extremal characteristics such as the 0.025 or 0.975 quantiles, though special care must be taken to ensure that these characteristics are well-defined. These characteristics may not be defined over space, time, or year domains because of trends or other factors. They are to be interpreted as parameters of the populations of all, infinitely many, model or real data replications for a fixed spatiotemporal domain. One may, for example, consider the 0.05 quantile function of the model distribution defined by $\begin{array}{rl}{q}_{\mathrm{0.05}}& =inf\mathit{\left\{}x\in \mathbb{R}:P\left({X}_{n}\left(\mathbit{s},t\right)\ge x,\mathbit{s}\in D,t\in T,\\ & n\in \mathcal{N}\right)\ge \mathrm{0.95}\mathit{\ Within the context described above, in addition to the hypotheses in Eq. (2), we may test more specific hypotheses of the general form $\begin{array}{}\text{(5)}& {H}_{\mathrm{0}}:{\mathit{\theta }}^{\mathrm{R}}={\mathit{\theta }}^{\mathrm{M}}\phantom{\rule{0.25em}{0ex}}\text{versus}\phantom{\rule{0.25em}{0ex}}{H}_{a}:{\mathit{\ theta }}^{\mathrm{R}}e {\mathit{\theta }}^{\mathrm{M}},\end{array}$ assuming that the parameter is well-defined, which allows us to assess the ways in which the distributions of F^M and F^R might differ. For example, while the means may be similar, the dispersion may differ. In Sect. 3.4, we discuss how the tests are practically implemented. 3.4Permutation tests In this section, we propose solutions to the testing problems in Eqs. (2) and (5). In Sect. 3.4.1, we explain how standard permutation tests can be applied, and we discuss their drawbacks. Section 3.4.2 focuses on an approach we propose to construct useful tests. 3.4.1Standard permutation tests First introduced by Fisher (1935), permutation tests are a popular approach for hypothesis tests comparing characteristics of two (or more) groups while requiring minimal distributional assumptions. In contrast to parametric tests, the weakened assumptions typically come at the expense of greater computational effort. Instead of assuming that the null distribution can be approximated by a parametric distribution, the null distribution is approximated using a resampling procedure. Specifically, the responses are permuted for all observations that are exchangeable under the null hypothesis, and a relevant test statistic quantifying the discrepancy between the relevant groups is computed for the permuted data. The null distribution is determined by considering the empirical distribution of the statistics computed for all possible permutations of the data (or approximated if a subset of all permutations is used). A statistical decision is made by comparing the observed statistic to the empirical distribution and quantifying the associated p value. Good (2006) provides details of the theory and practice of permutation tests. The use of permutation tests in the framework of functional data seems to have been introduced by Holmes et al. (1996) for comparing functional brain-mapping images and has been developed in many directions; see Nichols and Holmes (2002), Reiss et al. (2010), Corain et al. (2014), and Bugni and Horowitz (2021), among many others. These tests assume that the functions in two or more samples are i.i.d. or form i.i.d. regressor–response pairs. As explained above, this is not the case in the context of spatiotemporal functions we consider. We now elaborate on the potential application of permutation tests in our framework. Let $\mathbf{X}=\mathit{\left\{}{X}^{{\mathrm{R}}_{\mathrm{1}}},\mathrm{\dots },{X}^{{\mathrm{R}}_{{N}_{R}}},{X}^{{\mathrm{M}}_{\mathrm{1}}},\mathrm{\dots },{X}^{{\mathrm{M}}_{{N}_{M}}}\mathit{\right \}}$ denote the observed data, where (motivated by our application) the superscripts R and M denote responses from two different groups and where N[R] and N[M] denote the number of observations in each group. Let T(X) denote a statistic for assessing whether θ^R=θ^M. Let $T\left({\stackrel{\mathrm{̃}}{\mathbf{X}}}^{\left(\mathrm{1}\right)}\right),\mathrm{\dots },T\left({\stackrel{\mathrm{̃}}{\ mathbf{X}}}^{\left(B\right)}\right)$ denote the test statistics for all possible permutations of X under the null hypothesis. The null hypothesis is that the characteristics of interest in both samples are the same, so all $B=\left({N}_{R}+{N}_{M}\right)\mathrm{!}$ permutations can be used in general. The upper-tailed p value for this test would be $p=\frac{\mathrm{1}+{\sum }_{j=\mathrm{1}}^{B}I\left(T\left({\stackrel{\mathrm{̃}}{\mathbf{X}}}^{\left(j\right)}\right)\ge T\left(\mathbf{X}\right)\right)}{B+\mathrm{1}}.$ Although the standard permutation test can be used in a variety of testing contexts, including for functional data, it has limited utility in our present context because of the data structure. Specifically, since there is only a single realization of reanalysis data and there are 15 realizations of model data while there are 16! permutations of the indices, there are only 16 unique combinations of the data leading to different test statistics. For example, the sample mean of the model group will not change if the 15 models are permuted. Thus, even if testing at a significance level of 0.10, the test statistic for the observed data will have to be more extreme than every test statistic resulting from a data permutation in order to conclude statistical significance. This will lead to a severe lack of power for testing the equality of distributional characteristics from the reanalysis and climate model data. 3.4.2Stratified permutation tests In order to overcome the limitations of a standard permutation test in our present context, we propose a novel stratified permutation test for functional data. Matchett et al. (2015) introduced a general stratified permutation test to test whether rare stressors had an impact on certain animal species after controlling for certain covariates. Essentially, after classifying their data into different strata, Matchett et al. (2015) assumed that the responses within each stratum were exchangeable under the null hypothesis. This allowed for independent permutations of the data within each stratum, which could then be used to perform tests within or across strata. We propose a similar approach in the context of spatiotemporal functional data. Our particular context has a number of nuances that must be accounted for. We are quantifying distributional characteristics of our functional data across space over the course of an annual cycle over many years. Recalling our assumptions from Sect. 3.1, we allow the observations to be dependent across space and time within a year but assume that they are independent (but potentially non-stationary) between years. More precisely, we assume that the observations in year n follow the model ${X}_{n}\left(\mathbit{s},t\right)={\mathit{\mu }}_{n}\left(\mathbit{s},t\right)+{\mathit{\ epsilon }}_{n}\left(\mathbit{s},t\right)$, where t is time within a year. In our application, the variable t is a calendar month, $t=\mathrm{1},\mathrm{2},\mathrm{\dots },\mathrm{12}$, starting with January. The independence assumption means that the error surfaces, ${\mathit{\epsilon }}_{n}\left(\cdot ,\cdot \right)$, are independent and identically distributed across n. The mean surfaces, ${\ mathit{\mu }}_{n}\left(\cdot ,\cdot \right)$, look similar for each year n and dominate the shape of the observations (see Fig. 2). The validity of this assumption has been verified by computing sample cross-correlations and comparing them to critical values under the null hypothesis of white noise. We note that the division into years starting with January is arbitrary. The key is to use any interval that includes the whole year to account for the annual periodicity. We also note that we do not assume that the errors, say in January and July, have the same distribution or are independent. We only assume that the whole annual error curves are i.i.d. To preserve the spatial and temporal dependence between responses within a year, we permute whole spatiotemporal random fields within the same year across climate models instead of permuting data across space or time. A similar approach was proposed by Wilks (1997) in the context of bootstrapping spatial random fields, with a similar idea being used in the context of spatial time series in Dassanayake and French (2016). As eloquently described by Wilks, “Simultaneous application of the same resampling patterns to all dimensions of the data vectors will yield resampled statistics reflecting the cross-correlations in the underlying data, without the necessity of explicitly modeling those cross-correlations.” Directly applying this approach to the spatiotemporal random fields we consider would result in a functional version of the standard permutation test described in Sect. 3.4.1. We extend the standard test to a stratified version using years as strata. Since we assume the data are independent across years but are independent and identically distributed across models within a year under the null hypothesis, the random fields within a year are exchangeable under the null hypothesis. The advantage of this approach in our present testing context is as follows: instead of having only 16 effective permutations (i.e., unique combinations) with which to perform a test, we instead have ${\mathrm{16}}^{\mathrm{55}}>\mathrm{2}×{\mathrm{2}}^{\mathrm{31}}$ effective permutations. In practice, we implement the test using a large, random subset of the effective permutations to approximate the null distribution. To explain our methodology, we describe the stratified permutation test for functional data in more detail by assuming a fixed spatial location s and year n. For simplicity, we assume N[R]=1, N[M]=2, and N[t]=3. The data may be written as $\begin{array}{rl}{\mathbf{X}}_{n}\left(\mathbit{s}\right)& =\left[\begin{array}{c}{X}_{n}^{\mathrm{R}}\left(\mathbit{s}\right)\\ {X}_{n}^{{\mathrm{M}}_{\mathrm{1}}}\left(\mathbit{s}\right)\\ {X}_{n} ^{{\mathrm{M}}_{\mathrm{2}}}\left(\mathbit{s}\right)\end{array}\right]\\ & =\left[\begin{array}{ccc}{X}_{n}^{\mathrm{R}}\left(\mathbit{s},{t}_{\mathrm{1}}\right)& {X}_{n}^{\mathrm{R}}\left(\mathbit {s},{t}_{\mathrm{2}}\right)& {X}_{n}^{\mathrm{R}}\left(\mathbit{s},{t}_{\mathrm{3}}\right)\\ {X}_{n}^{{\mathrm{M}}_{\mathrm{1}}}\left(\mathbit{s},{t}_{\mathrm{1}}\right)& {X}_{n}^{{\mathrm{M}}_{\ mathrm{1}}}\left(\mathbit{s},{t}_{\mathrm{2}}\right)& {X}_{n}^{{\mathrm{M}}_{\mathrm{1}}}\left(\mathbit{s},{t}_{\mathrm{3}}\right)\\ {X}_{n}^{{\mathrm{M}}_{\mathrm{2}}}\left(\mathbit{s},{t}_{\mathrm {1}}\right)& {X}_{n}^{{\mathrm{M}}_{\mathrm{2}}}\left(\mathbit{s},{t}_{\mathrm{2}}\right)& {X}_{n}^{{\mathrm{M}}_{\mathrm{2}}}\left(\mathbit{s},{t}_{\mathrm{3}}\right)\end{array}\right].\end{array}$ A possible permutation (i) of the data would relabel ${X}_{n}^{\mathrm{R}}\left(\mathbit{s}\right)$ as ${X}_{n}^{{\mathrm{M}}_{\mathrm{1}}}\left(\mathbit{s}\right)$, ${X}_{n}^{{\mathrm{M}}_{\mathrm {1}}}\left(\mathbit{s}\right)$ as ${X}_{n}^{{\mathrm{M}}_{\mathrm{2}}}\left(\mathbit{s}\right)$, and ${X}_{n}^{{\mathrm{M}}_{\mathrm{2}}}\left(\mathbit{s}\right)$ as ${X}_{n}^{\mathrm{R}}\left(\ mathbit{s}\right)$, resulting in $\begin{array}{rl}{\stackrel{\mathrm{̃}}{\mathbf{X}}}_{n}\left(\mathbit{s}{\right)}_{\left(i\right)}& =\left[\begin{array}{c}{\stackrel{\mathrm{̃}}{X}}_{n}^{\mathrm{R}}\left(\mathbit{s}{\right)}_{\left (i\right)}\\ {\stackrel{\mathrm{̃}}{X}}_{\mathrm{1}}^{{\mathrm{M}}_{\mathrm{1}}}\left(\mathbit{s}{\right)}_{\left(i\right)}\\ \stackrel{\mathrm{̃}}{X}{}_{\mathrm{1}}^{{\mathrm{M}}_{\mathrm{2}}}\left(\ mathbit{s}{\right)}_{\left(i\right)}\end{array}\right]\\ & \equiv \left[\begin{array}{ccc}{X}_{n}^{{\mathrm{M}}_{\mathrm{1}}}\left(\mathbit{s},{t}_{\mathrm{1}}\right)& {X}_{n}^{{\mathrm{M}}_{\mathrm {1}}}\left(\mathbit{s},{t}_{\mathrm{2}}\right)& {X}_{n}^{{\mathrm{M}}_{\mathrm{1}}}\left(\mathbit{s},{t}_{\mathrm{3}}\right)\\ {X}_{n}^{{\mathrm{M}}_{\mathrm{2}}}\left(\mathbit{s},{t}_{\mathrm{1}}\ right)& {X}_{n}^{{\mathrm{M}}_{\mathrm{2}}}\left(\mathbit{s},{t}_{\mathrm{2}}\right)& {X}_{n}^{{\mathrm{M}}_{\mathrm{2}}}\left(\mathbit{s},{t}_{\mathrm{3}}\right)\\ {X}_{n}^{\mathrm{R}}\left(\mathbit {s},{t}_{\mathrm{1}}\right)& {X}_{n}^{\mathrm{R}}\left(\mathbit{s},{t}_{\mathrm{2}}\right)& {X}_{n}^{\mathrm{R}}\left(\mathbit{s},{t}_{\mathrm{3}}\right)\end{array}\right].\end{array}$ The permutation respects spatial location and time within the year while reordering the data label with respect to model. While this example fixed the spatial location s, the exact same permutation of the data labels would be used for all spatial locations s∈𝒮 within a specific year n. However, the data label ordering would be chosen independently across year. We illustrate the differences between the standard and stratified permutation tests for (time series) functional data in Fig. 2. The original data have three observations (indicated by unique colors). The first observation is part of the “reanalysis” group, while the next two are part of the “model” group. The original data are shown at monthly intervals over 3 years in panel (a). A standard permutation of the functional data simply relabels the group associated with each observation. In panel (b), the standard permutation shows that observation 2 has been relabeled as reanalysis data, while observation 1 has been relabeled as model data. The original structure of the data is completely preserved; the group labels are simply reassigned. In panel (c), we see a stratified permutation of the data. The data labels are randomly permuted in each year, but the data structure is completely preserved. In panel (c), we see that in year 1 observation 2 has been relabeled into the reanalysis group, while observation 1 has been relabeled into the model group. In year 2, observation 3 has been relabeled into the reanalysis group, while observation 1 has been relabeled into the model group. In year 3, the relabeling process results in the data residing in the original groups. These permuted data are treated in the same way as original data. The stratified permutation test for functional data results in substantially more permutations, correcting the power problem resulting from having a small number of available permutations. Its validity depends on the assumption that ${X}_{j}^{\mathrm{M}}\stackrel{\text{i.i.d.}}{\sim }{F}^{\mathrm{M}}$. The independence of the fields ${X}^{{M}_{j}}$ and ${X}^{{\mathrm{M}}_{{j}^{\prime }}}$ for $je {j}^{\prime }$ means that any functional computed from ${X}^{{M}_{j}}$ is independent of any functional computed from ${X}^{{\mathrm{M}}_{{j}^{\prime }}}$. An analogous statement is true for the equality of the distributions of these fields. The i.i.d. assumption cannot, thus, be fully verified. However, it is possible to provide some evidence to support it. One can proceed as follows. Choose at random K=100 locations, and for each year n compute ${G}_{j,n}=\frac{\mathrm{1}}{K}\sum _{k=\mathrm{1}}^{K}\frac{\mathrm{1}}{\mathrm{12}}\sum _{i=\mathrm{1}}^{\mathrm{12}}{X}_{n}^{{M}_{j}}\left({\mathbit{s}}_{k},{t}_{i}\right),\phantom{\rule{0.33em} {0ex}}\phantom{\rule{0.33em}{0ex}}\phantom{\rule{0.33em}{0ex}}j=\mathrm{1},\mathrm{2},\mathrm{\dots }\mathrm{15}.$ That is, we average the temperature values in year n across 100 randomly selected spatial locations for all 12 months in year n. The function G[j,n] is an example of a relevant functional of the field ${X}^{{M}_{j}}$ of our infinitely many possible functionals. After computing cross-correlations and applying the Cramér–von Mises test of the equality in the distribution, we found strong support for the assumption of independence and a somewhat weaker, but still convincing, piece of evidence for the equality of distributions. We created a simulation study to better understand the properties of the proposed stratified permutation test compared to a standard permutation test. We set up the study with two goals in mind. First, we wanted to confirm that the proposed method controls the type I error rate for individual tests. This is a minimum requirement for almost any statistical test, so we verify it for the proposed procedures. Second, we wanted to investigate power-related properties of the two tests after adjusting for multiple comparisons. In practice, our testing procedure will be used to perform an inference for a large number of spatial locations. If we only control the type I error rate for individual tests, then definitive statistical conclusions cannot be drawn from the results of many tests since we are unable to quantify the number of errors to expect. Thus, we must use an appropriate multiple comparisons procedure to draw definitive conclusions from our tests. We will make the appropriate adjustments to the multiple comparisons procedure and then compare the power of the two testing procedures. 4.1Simulation setup We desired to create simulation data that approximated the kind of data we will be investigating later in this paper. As a first step, we determined the mean and standard deviation of each spatial location for each month across the years 1950–2004 (660 time steps) for the MBCn bias-corrected gridMET data. We then focused on a 32×42 subgrid in the study area for the 300 months between January 1980 and December 2004. Figure 3 displays a heat map of the average temperature of the gridMET data in January 1980 for the 32×42 subgrid. We represent each spatial location by a grid cell for plotting purposes. We considered three main simulation scenarios. For each distinct simulation scenario, we generated 100 different replications of the scenario. Each replication utilizes 10 data realizations: 9 playing the role of climate model output and the last 1 playing the role of reanalysis data. In the first scenario, all 10 data realizations came from the same data-generating “null” distribution. In the second scenario, the mean of the reanalysis data was shifted by some amount each month for all time steps (described in more detail below). In the third scenario, the mean of the reanalysis data was shifted by some amount each month, starting two-thirds of the way through the time steps (described in more detail below) for a subset of the spatial locations. For each null data set, we simulated AR(1) (autoregressive model of order 1) temporal processes at each spatial location with correlation parameter ρ=0.1 and means and standard deviations equal to the corresponding time step of the gridMET data. To induce spatial correlation at each time step, each simulated response was averaged with the response of the four neighbors that its grid cell shared a border with. To avoid edge effects, we then restricted our analysis to the 30×40 interior grid of the 32×42 subgrid for a total of 1200 spatial locations. We generated two types of non-null data sets. We generated them using the same basic process as the null data except that the monthly mean temperatures of the reanalysis data sets differ from the monthly mean temperatures for the climate model output data sets. The means differ for the reanalysis data sets in one of the following ways compared to the climate model output data sets: (i) the mean for each month is ${m}_{t}+c×{\text{SD}}_{t}$, where m[t] and SD[t] are the temperature mean and standard deviation for that month, respectively, computed from the gridMET data, and where c is a scaling constant, or (ii) starting in January of the 21st year and in each subsequent month, the mean for each month is ${m}_{t}+c×{\text{SD}}_{t}$ in a contiguous subsection of the study area. We depict this subsection in Fig. 3. In this way, we are able to assess the performance of the procedures when there is a change in the data structure for only a part of the time period considered and for all of the time period considered. Non-null data sets were generated with scaling constants $c=\mathrm{0.15},\mathrm{0.20},\mathrm{0.25},\mathrm{0.30}$, and 0.35 for the first non-null scenario and $c=\mathrm{1},\mathrm{1.25},\mathrm{1.5}$, and 2 for the second non-null scenario. Two types of tests were performed using each permutation procedure: (i) a test of distributional equality at each spatial location using the distributional equality statistic defined in Eq. (3) and (ii) a test of the difference in mean temperature between the reanalysis and climate model data at each spatial location. More specifically, let $\stackrel{\mathrm{^}}{\mathit{\theta }}\left(\mathbit{s}\right)=\frac{\mathrm{1}}{\mathrm{30}×\mathrm{12}}\sum _{n=\mathrm{1}}^{\mathrm{30}}\sum _{t=\mathrm{1}}^{\mathrm{12}}{X}_{n}\left(\mathbit denote the 30-year average temperature for a particular set of functions. We tested whether there is a difference in the mean temperature between the reanalysis and climate model data using the following statistic: $\stackrel{\mathrm{^}}{T}\left(\mathbit{s}\right)=\frac{\mathrm{1}}{\mathrm{10}}\sum _{j=\mathrm{1}}^{\mathrm{10}}|{\stackrel{\mathrm{^}}{\mathit{\theta }}}^{{M}_{j}}\left(\mathbf{s}\right)-{\ stackrel{\mathrm{^}}{\mathit{\theta }}}^{\mathrm{R}}\left(\mathbf{s}\right)|,$ where M[j] and R denote the functions for a specific climate model and the reanalysis data, respectively. The smallest possible p value for the standard permutation test was 0.10 since there are only 10 possible permutations of the data. Conversely, the smallest possible p value for the stratified permutation test was 0.001 since those tests were implemented with B=999 permutations. 4.2Simulation results We begin by verifying that the standard and proposed stratified permutation tests satisfy the minimum standard of controlling for the type I error rate at individual locations. We compute the empirical type I error rate at individual locations using the 100 simulated null data sets described in Sect. 4.1. To reduce the dependence between tests for a particular data set, we randomly selected 20 spatial locations from each replication and then computed the empirical type I error rates for the associated tests across the $\mathrm{100}×\mathrm{20}=\mathrm{200}$ tests for various significance levels. Since the tests are applied to the null data, a false positive occurs anytime the p value for a test is less than the nominal significance level. Figure 4 displays the empirical type I error rates associated with each significance level for the standard and stratified permutation tests. Different colored symbols are used to distinguish the results for each permutation procedure. The vertical black lines indicate the 95% tolerance intervals for the empirical type I error rates associated with each significance level. The standard permutation tests can only have associated p values of $\mathrm{0.1},\mathrm{0.2},\mathrm{\dots },\mathrm{1}$. The empirical type I error rates are close to the associated significance level, as expected. One of the 10 empirical type I error rates for the standard permutation tests is outside the tolerance intervals. Zero of the 20 empirical type I error rates are outside the tolerance envelopes for the stratified permutation tests. Next, we evaluated the power of the standard and proposed stratified permutation tests after adjusting for multiple comparisons. The previous results were only meant to confirm that the testing procedures we utilized satisfied a minimum acceptable standard for a statistical test. When performing many statistical tests, such as in the present context, it is imperative that an appropriate adjustment is made to control a relevant error rate for the many tests, such as the familywise error rate (FWER) or the false discovery rate (FDR). Controlling the FWER in the context of many tests often leads to undesirably low statistical power. Conversely, statistical power is greater when the FDR is controlled instead. Benjamini and Hochberg (1995) proposed a simple procedure for controlling the FDR in the context of multiple comparisons. Benjamini and Yekutieli (2001) proposed a simple procedure for controlling the FDR when test statistics are correlated in a multiple testing context, which we call the BY procedure. Because there was at least some spatial dependence in the tests we performed, we adjusted the p values for our tests using the BY procedure before determining significance. For a specific non-null scenario with a fixed level of c, we computed the empirical power for a specified significance level by determining the proportion of spatial locations across the 100 replicated data sets that had an adjusted p value less than the significance level. We first summarize the power results for the first non-null scenario, in which there is a difference in the mean temperature of the reanalysis and climate model data for all spatial locations and times. We computed results for both the standard and stratified permutation tests. Figure 5 displays the power results for this scenario when the procedure proposed by Benjamini and Yekutieli (2001) is used to adjust for multiple comparisons. The empirical power of the standard permutation test was zero for all levels of significance, so we do not show the results. When c=0.15 for the stratified permutation test, the power is low to begin with but starts to increase with the significance level. As c increases to 0.35, the power of the stratified tests increases to 1 for all levels of significance. Conversely, because its p values were bounded below by 0.1, the standard permutation test was never able to identify a single location where the distribution of the reanalysis and climate model data differed, regardless of how large the difference was. Next, we summarize the results of the second non-null scenario, in which the mean is shifted for only 121 of the locations for the last 10 years of available data. Figure 5 displays the empirical power when the BY procedure is used to adjust for multiple comparisons. Because the difference in the average temperature was only present for the last 10 years of time, it was more difficult to identify a significant distributional difference at the non-null locations. When c=1, the stratified procedure struggled to detect any differences at usual significance levels, but it was still able to identify some differences for larger significance levels. As c increased to 1.25, 1.5, and 2, the empirical power of the stratified permutation test continued to improve. The empirical power for the standard permutation test was zero for all c and significance levels for this non-null scenario, so its results are not shown in Fig. 5. In our simulation study, the stratified permutation test exhibits satisfactory power when adjustment is made for the multiple comparisons through the BY procedure. We conclude that if we want adequate power to discover distributional differences between the reanalysis and climate model data sets, the standard permutation test is inadequate. The application of the proposed stratified functional permutation test is required. 5Climate model evaluation results We now compare different distributional aspects of the 15 climate model data sets produced by the NA-CORDEX program and the ERA5 reanalysis data, both of which are discussed in more detail in Sect. 2 . We perform separate comparisons for each bias-corrected data set (gridMET and Daymet). We first examine distributional equality for the reanalysis and climate model data. We initially test global distributional equality between F^R and F^M using the test statistic in Eq. (4). For both bias-corrected data sets, the standard permutation tests both return a p value of 0.0625, the lowest possible value for the 16 effective permutations. The stratified permutation test, using a random sample of 999 stratified permutations, returns a p value of 0.001. Next, we consider the spatial test of distributional equality using the test statistic in Eq. (3). We emphasize that we adjust for the multiple comparisons problem using the BY procedure. Figure 6 displays heat maps of the p values less than 0.10 since that is widely used as the largest acceptable level of significance for a hypothesis test. We do not color locations where the p value is more than 0.10. We follow this same pattern in other graphics for this section. For both bias-corrected data sets, substantial portions of the domain exhibit evidence that the distributions of the reanalysis and climate model data are not in agreement. Next, we identify ways in which the distributions differ. We test the hypotheses in Eq. (5) regarding θ^R(s)=θ^M(s) for several characteristics: the 55-year mean temperature, median temperature, temperature standard deviation, and temperature interquartile range. As mentioned in Sect. 3.4.1, we need to determine a suitable test statistic for testing these hypotheses. In our context, we wish to assess discrepancies in the 55-year behavior of the climate model characteristics in comparison to the reanalysis characteristics. Thus, it seems sensible to begin by summarizing the characteristics of the 55-year functional time series. Specifically, for the partial realization of X(s), we compute a statistic that summarizes some characteristic of the distribution over the 55 years. For example, the mean behavior would be summarized by $\stackrel{\mathrm{^}}{\mathit{\theta }}\left(\mathbit{s}\right)=\frac{\mathrm{1}}{\mathrm{55}×\mathrm{12}}\sum _{n=\mathrm{1}}^{\mathrm{55}}\sum _{t=\mathrm{1}}^{\mathrm{12}}{X}_{n}\left(\mathbit which we use to summarize the average temperature over the specified time frame. Similarly, since there are $\mathrm{55}×\mathrm{12}=\mathrm{660}$ observed values in each time series, the 0.50 quantile statistic would be the empirical 0.50 quantile of the 660 observed data values at location s, i.e., of the set $\mathit{\left\{}{x}_{\mathrm{1}}\left(\mathbit{s},\mathrm{1}\right),{x}_{\ mathrm{1}}\left(\mathbit{s},\mathrm{2}\right),\mathrm{\dots },{x}_{\mathrm{1}}\left(\mathbit{s},\mathrm{12}\right),\mathrm{\dots },{x}_{\mathrm{55}}\left(\mathbit{s},\mathrm{1}\right),\mathrm{\dots },{x}_{\mathrm{55}}\left(\mathbit{s},\mathrm{12}\right)\mathit{\right\}}$. In order to quantify the discrepancy of the test statistics across the reanalysis and climate model data, we use the average absolute difference between statistics across the reanalysis and climate model groups. Formally, we compute the statistic $\begin{array}{}\text{(6)}& \stackrel{\mathrm{^}}{T}\left(\mathbit{s}\right)=\frac{\mathrm{1}}{{N}_{M}}\sum _{j=\mathrm{1}}^{{N}_{M}}|{\stackrel{\mathrm{^}}{\mathit{\theta }}}^{\mathrm{R}}\left(\ mathbit{s}\right)-{\stackrel{\mathrm{^}}{\mathit{\theta }}}^{{M}_{j}}\left(\mathbit{s}\right)|\end{array}$ for each spatial location and assess the significance using standard and stratified permutation tests. For all tests below, we evaluate the significance of the test at each spatial location after using the BY procedure to control the FDR. In the following discussion, we will typically drop the “55-year” qualifier for parameters and statistics for brevity. We first examine the results related to measures of center. We test equality of the mean temperature of the reanalysis and climate model data at each spatial location, and similarly, we test equality of the median temperature. Figure 7 displays heat maps of the BY-adjusted p values ≤0.10 for these tests. The temperature means for the reanalysis and climate model data tend to differ in the western part of the United States and along the eastern coastline of the United States. We also see evidence of mean temperature differences in much of Canada and Mexico for the Daymet data. There are fewer locations exhibiting a difference in median temperature along the eastern coastline of the United States compared to a difference in mean temperature, though the opposite pattern is observed in the middle part of the United States. For the Daymet data, the locations exhibiting a mean or median temperature difference in Mexico tend to be similar, though there are fewer locations in Canada exhibiting a difference in median temperature. Next, we consider tests for dispersion-related parameters, specifically the standard deviation and interquartile range (IQR) of the data. We test equality of the temperature standard deviation for the reanalysis and climate model data at each spatial location, and similarly, we test equality of the temperature interquartile range. Figure 8 displays heat maps of the BY-adjusted p values for the locations where the p value ≤0.10 for these tests. Overall, we see similar patterns for a fixed characteristic (standard deviation or interquartile range) across both bias-corrected data sets. Similarly, if we fix the data set (gridMET or Daymet), we see similar p-value patterns across the measure of spread. However, there are noticeably fewer locations with adjusted p values ≤0.10 for the tests of equality of the temperature interquartile range compared to tests for standard deviation. Lastly, we focus on the results of tests related to characterizing the functional nature of the data. We want to formally compare the functional behavior of the reanalysis and climate data over time at each spatial location. Consequently, we fit a B-spline with 276 equidistant knot locations over the 660 months of temperature data available at each spatial location (essentially five knots per year), resulting in 276 estimated coefficients for each spatial location. We then compared whether the coefficients associated with the reanalysis data were equal to the coefficients for the climate model data. Such a test allows us to determine the times when the climate model data patterns disagree with the reanalysis data. For each spatial location, we computed the statistic $\begin{array}{}\text{(7)}& \stackrel{\mathrm{^}}{T}\left(\mathbit{s}\right)=\frac{\mathrm{1}}{\mathrm{276}{N}_{M}}\sum _{j=\mathrm{1}}^{{N}_{M}}\sum _{k=\mathrm{1}}^{\mathrm{276}}|{\stackrel{\mathrm {^}}{\mathit{\theta }}}_{k}^{\mathrm{R}}\left(\mathbit{s}\right)-{\stackrel{\mathrm{^}}{\mathit{\theta }}}_{k}^{{M}_{j}}\left(\mathbit{s}\right)|,\end{array}$ where ${\stackrel{\mathrm{^}}{\mathit{\theta }}}_{k}^{\mathrm{R}}\left(\mathbit{s}\right)$ is the kth estimated coefficient for the reanalysis data at location s, and ${\stackrel{\mathrm{^}}{\mathit {\theta }}}_{k}^{{M}_{j}}\left(\mathbit{s}\right)$ is the kth estimated coefficient for the jth climate model data set at location s. Figure 9 displays heat maps of the p values for the locations where the test is significant at α=0.10. Perhaps unsurprisingly, the results for this test are similar to those for the test of distributional equality. We see significant differences in the coefficients for the reanalysis and climate model data in the western part of the United States, as well as the northern parts of Canada and the central parts of Mexico. Comparison of our results with previous results is difficult as the studies we are familiar with focus on evaluating specific distributional characteristics of climate models compared to observational data in specific places. Additionally, the reference data sets may differ, making it difficult to compare analyses. Lee et al. (2019) provide the most similar comparison to our present analysis, in which they compare temperature trends between reference and climate model data over seven regions in the continental United States. This comparison was made for summer and winter seasons for three time periods: 1895–1939, 1940–1979, and 1980–2005. Lee et al. (2019) aggregate their results over seven large regions, whereas we make an inference at a finer spatial scale. The time periods of our comparison also differ. Additionally, Lee et al. (2019) separate summer and winter behavior so that they can look at trends, whereas we consider behavior over the entire year. A key difference in our comparisons is that we use the ERA5 data as our reference data set, while Lee et al. (2019) use the Global Historical Climatology Network – Daily (GHCN-Daily) data set (Menne et al. , 2012). Those caveats aside, our analysis tends to find the most agreement between the temperature distributions of the reference data and the climate model data in the middle part of the United States with less similarity in the eastern and western parts. The analysis by Lee et al. (2019) tended to find more similar temperature trends in the eastern and western parts of the United States and less similarity in the middle part of the United States (see Fig. 7 of Lee et al., 2019). However, little can be concluded from this disagreement, because our analyses differ in approach, reference data set, and temperature characteristic; their similarity is limited to the variable of interest (temperature) and location (continental United States). We have presented a new stratified permutation test appropriate for comparing the distributional characteristics of climate model and reanalysis data. In our context, a standard permutation test, even when adjusted to preserve spatial and temporal dependence, is not effective for performing comparisons because there are few unique permutations, limiting the discriminating power of the test. The proposed permutation procedure allows for the creation of millions of unique permutations, which substantially improves the power of the testing procedure for usual significance levels. Additionally, the new testing procedure makes it possible to apply proven approaches for addressing the multiple comparisons problem, which are ineffective in the context of standard permutation We applied our stratified permutation test in comparing the distributional characteristics of bias-corrected NA-CORDEX climate model data output to the ERA5 reanalysis data for monthly temperature data over the years 1950–2004. We used the testing procedure proposed by Benjamini and Yekutieli (2001) to control the FDR of our tests. The temperature distributions of the NA-CORDEX and ERA5 data sets tended to be most similar in the middle and eastern parts of the United States, with distributions tending to significantly differ in parts of Canada and most of Mexico. Our analysis focused mostly on simple characteristics of the data like the 55-year mean, median, standard deviation, and interquartile range of the temperatures. We also considered a broader test of distributional equality and a comparison of the coefficients of a functional representation of the temperature time series. However, these tests could be done for more refined characteristics of the data such as looking at features for particular seasons (average temperature in a particular month), looking at distributional changes over particular time periods (the decadal changes in the interquartile range of temperature) or looking at smaller-scale characteristics (rate of change characteristics for hourly level data). A possible critique of our analysis is that the NA-CORDEX climate model data sets may not be independent. Specifically, several of the data sets use conditions from the same GCM, so one could argue that the data for RCM–GCM combinations sharing the same GCMs are not independent. To assess the impact of this, we ran a secondary analysis using only the NA-CORDEX climate model output for models using different GCMs. The results for the secondary analysis, shown in the Supplement, are very similar to the results discussed in Sect. 5. Another reasonable critique might be that different GCMs do not follow the same probability distribution. However, without this assumption, no replications could be considered, and statistical inference would be practically infeasible. Based on the similarity of the power results for tests of distributional equality and mean equality in Sect. 4.2, one may wonder whether one test is preferred over the other. The choice only depends on the goals of the researcher. A test of distributional equality can only inform the researcher of whether the overall distribution of the reanalysis data is similar to the climate model output; rejecting the null hypothesis does not tell the researcher how the distributions differ. Do they differ with respect to center, spread, quantile behavior, and so on? Conversely, a test based on specific distributional attributes like the mean, median, or interquartile range only evaluates whether the distributions differ with respect to a single characteristic. Failing to reject the null hypothesis does not mean that the distributions being compared do not differ; it only means they do not differ significantly with respect to that single characteristic. Ultimately, the tests provide complementary information, and the researcher must choose the information that is most important for their study. Angélil et al. (2016) recommend using multiple reanalysis data sets when performing climate model evaluation, so one could augment the presented analysis by including reanalysis data from NASA's MERRA2 (Modern-Era Retrospective analysis for Research and Applications, Version 2) program and the Japanese 55-year Reanalysis (JRA-55; Kobayashi et al., 2015). Those data sets have different spatial domains and time periods over which the data are available, so adjustments would have to be made to account for these differences. However, we hope to provide a more thorough analysis involving these additional reanalysis data sets to investigate the similarity of behavior between the reanalysis data and the climate model output data. Additionally, our present investigation focused only on temperature, which tends to behave well in the sense of having a relatively symmetric, bell-shaped distribution when considering observations at a similar time and place. Another variable of great scientific interest is precipitation, which behaves very differently from temperature. Precipitation data can be highly skewed and zero-inflated, which can require additional analysis considerations that are beyond the scope of this paper, even when minimal distributional assumptions are made with respect to the proposed test. Additionally, Dee et al. (2016) warn that “Diagnostic variables relating to the hydrological cycle, such as precipitation and evaporation, should be used with extreme caution”. However, we hope to investigate the behavior of precipitation for reanalysis and climate model output data in future efforts. Some readers may be interested in comparing the spatial patterns of p values for the BY-adjusted p values of the stratified permutation test with the unadjusted and BY-adjusted p values of the standard permutation test as well as the unadjusted p values of the stratified permutation test. The standard permutation procedure has much lower testing power than the stratified permutation test. While one can have significant results at many locations when using the standard p values, the BY-adjusted p values will have zero significant locations. One implementation of the Benjamini–Yekutieli (BY) procedure takes the standard p values and adjusts them upward so that testing can be performed at a fixed significance level while addressing the multiple comparisons problem. Since the BY-adjusted p values are uniformly larger than the unadjusted p values, any location significant after the BY p-value adjustment is automatically significant at the same significance level for the unadjusted p values. However, we would also expect additional significant locations when using the unadjusted p values; the locations significant for the unadjusted p values will extend from the locations that are significant for the adjusted p values. Figure S5 in the Supplement provides a visual comparison of this behavior for a test of distributional equality based on the statistic in Eq. (3). Our stratified permutation test is highly scalable since tests can be parallelized across permutations or spatial locations. In the analyses we considered, the time needed to perform the tests increased linearly with the number of spatial locations and time steps. We analyzed monthly rather than daily data in order to reduce the run time and because our focus was on decadal climate patterns. However, especially for shorter time periods, there could be distributional characteristics that can only be studied through the examination of daily or even hourly data. If the data sets cannot be held in memory at one time, then the stratified permutation test can still be applied by summarizing statistics one location at a time, assuming that the spatiotemporal data are structured so that the responses for specific spatial locations at specific time steps for a specific model can be accessed conveniently. This modified implementation of the test would likely be slower than when the data can be held in memory, but this would allow for the analysis of much larger data sets. Alternatively, one could first represent the data in a functional form prior to analysis, e.g., using the spatiotemporal sandwich smoother of French and Kokoszka (2021), which would dramatically reduce the memory space needed to represent the data structure or smoothed values. Tests could then be performed using the smoothed data, the functional parameters, or the related characteristics. We hope that the methodology we developed and the insights we presented will stimulate related research on comparing model and historical climate data using the increasingly available data products. Code and data availability The original NA-CORDEX data are available at https://doi.org/10.5065/D6SJ1JCH (Mearns et al., 2017). The ERA5 data are available at https://doi.org/10.24381/cds.adbb2d47 (Hersbach et al., 2023). Due to the large volume of data that need to be acquired, processed, and analyzed, providing an easily reproducible analysis is impossible. However, we have attempted to make our code (French, 2024) as simple and generalizable as possible to reproduce our analysis. The French (2024) code may be accessed at https://doi.org/10.5281/zenodo.13228244. JPF: conceptualization, data curation, literature review, methodology, software, formal analysis, writing, visualization. PSK: conceptualization, literature review, methodology, writing. SM: conceptualization, literature review, writing, contextualization. The contact author has declared that none of the authors has any competing interests. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors. The ERA5 data were made available by the Copernicus Climate Change Service and modified for use in this paper. This material is based upon work supported by the NSF National Center for Atmospheric Research, which is a major facility sponsored by the National Science Foundation under cooperative agreement no. We thank the associate editor and two referees for carefully reading the manuscript. Their insightful comments greatly improved the quality of this paper. This research has been supported by the Directorate for Mathematical and Physical Sciences (grant nos. 1915277 and 1914882). Seth McGinnis was partially supported by the US Department of Energy Regional and Global Climate Modeling program award DOE DE-SC0016605. This paper was edited by Likun Zhang and reviewed by two anonymous referees. Angélil, O., Perkins-Kirkpatrick, S., Alexander, L. V., Stone, D., Donat, M. G., Wehner, M., Shiogama, H., Ciavarella, A., and Christidis, N.: Comparing regional precipitation and temperature extremes in climate model and reanalysis products, Weather and Climate Extremes, 13, 35–43, https://doi.org/10.1016/j.wace.2016.07.001, 2016.a The HadGEM2 Development Team: G. M. Martin, Bellouin, N., Collins, W. J., Culverwell, I. D., Halloran, P. R., Hardiman, S. C., Hinton, T. J., Jones, C. D., McDonald, R. E., McLaren, A. J., O'Connor, F. M., Roberts, M. J., Rodriguez, J. M., Woodward, S., Best, M. J., Brooks, M. E., Brown, A. R., Butchart, N., Dearden, C., Derbyshire, S. H., Dharssi, I., Doutriaux-Boucher, M., Edwards, J. M., Falloon, P. D., Gedney, N., Gray, L. J., Hewitt, H. T., Hobson, M., Huddleston, M. R., Hughes, J., Ineson, S., Ingram, W. J., James, P. M., Johns, T. C., Johnson, C. E., Jones, A., Jones, C. P., Joshi, M. M., Keen, A. B., Liddicoat, S., Lock, A. P., Maidens, A. V., Manners, J. C., Milton, S. F., Rae, J. G. L., Ridley, J. K., Sellar, A., Senior, C. A., Totterdell, I. J., Verhoef, A., Vidale, P. L., and Wiltshire, A.: The HadGEM2 family of Met Office Unified Model climate configurations, Geosci. Model Dev., 4, 723–757, https://doi.org/10.5194/gmd-4-723-2011, 2011.a Benjamini, Y. and Hochberg, Y.: Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing, J. Roy. Stat. Soc. B, 57, 289–300, 1995.a Benjamini, Y. and Yekutieli, D.: The control of the false discovery rate in multiple testing under dependency, Ann. Stat., 29, 1165–1188, https://doi.org/10.1214/aos/1013699998, 2001.a, b, c, d Bugni, F. and Horowitz, L.: Permutation tests for equality of distributions of functional data, J. Appl. Econom., 36, 861–877, 2021.a Cannon, A. J.: Multivariate Bias Correction of Climate Model Output: Matching Marginal Distributions and Intervariable Dependence Structure, J. Climate, 29, 7045–7064, https://doi.org/10.1175/ JCLI-D-15-0679.1, 2016.a Cannon, A. J.: Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables, Clim. Dynam., 50, 31–49, Cannon, A. J., Sobie, S. R., and Murdock, T. Q.: Bias Correction of GCM Precipitation by Quantile Mapping: How Well Do Methods Preserve Changes in Quantiles and Extremes?, J. Climate, 28, 6938–6959, https://doi.org/10.1175/JCLI-D-14-00754.1, 2015.a Christensen, O. B., Drews, M., Christensen, J. H., Dethloff, K., Ketelsen, K., Hebestadt, I., and Rinke, A.: The HIRHAM regional climate model, Version 5 (beta), Danish Meteorological Institute, Denmark, Technical Report, 06-17, 2007.a Chylek, P., Li, J., Dubey, M. K., Wang, M., and Lesins, G.: Observed and model simulated 20th century Arctic temperature variability: Canadian Earth System Model CanESM2, Atmos. Chem. Phys. Discuss., 11, 22893–22907, https://doi.org/10.5194/acpd-11-22893-2011, 2011.a Copernicus Climate Change Service (C3S): ERA5: Fifth generation of ECMWF atmospheric reanalyses of the global climate, Copernicus Climate Change Service Climate Data Store (CDS), https:// cds.climate.copernicus.eu/cdsapp#!/home (last access: 10 March 2020), 2017.a, b Corain, L., Melas, V. B., Pepelyshev, A., and Salmaso, L.: New insights on permutation approach for hypothesis testing on functional data, Adv. Data Anal. Classi., 8, 339–356, 2014.a CORDEX: WCRP CORDEX, https://cordex.org/ (last access: 19 June 2020), 2020.a Dassanayake, S. and French, J. P.: An improved cumulative sum-based procedure for prospective disease surveillance for count data in multiple regions, Stat. Med., 35, 2593–2608, https://doi.org/ 10.1002/sim.6887, 2016.a Dee, D., Fasullo, J., Shea, D., Walsh, J., and National Center for Atmospheric Research Staff: Atmospheric Reanalysis: Overview & Comparison Tables, https://climatedataguide.ucar.edu/climate-data/ atmospheric-reanalysis-overview-comparison-tables (last access: 21 April 2020), 2016.a, b, c, d, e Dunne, J. P., John, J. G., Adcroft, A. J., Griffies, S. M., Hallberg, R. W., Shevliakova, E., Stouffer, R. J., Cooke, W., Dunne, K. A., Harrison, M. J., Krasting, J. P., Malyshev, S. L., Milly, P. C. D., Phillipps, P. J., Sentman, L. T., Samuels, B. L., Spelman, M. J., Winton, M., Wittenberg, A. T., and Zadeh, N.: GFDL’s ESM2 Global Coupled Climate–Carbon Earth System Models. Part I: Physical Formulation and Baseline Simulation Characteristics, J. Climate, 25, 6646–6665, https://doi.org/10.1175/JCLI-D-11-00560.1, 2012.a European Centre for Medium-Range Weather Forecasts: https://confluence.ecmwf.int/display/CKB/ERA5%3A+data+documentation (last access: 31 July 2023), 2023a.a European Centre for Medium-Range Weather Forecasts: https://www.ecmwf.int/en/forecasts/dataset/ecmwf-reanalysis-v5 (last access: 6 August 2023), 2023b.a Fisher, R. A.: Design of Experiments, Oliver and Boyd, Edinburgh, 1935.a French, J. P.: jfrench/ascmo_2024: v1.0, Zenodo [code], https://doi.org/10.5281/zenodo.13228244, 2024.a, b French, J. P. and Kokoszka, P. S.: A sandwich smoother for spatio-temporal functional data, Spatial Statistics, 42, 100413, https://doi.org/10.1016/j.spasta.2020.100413, 2021.a Garrett, R. C., Harris, T., and Li, B.: Evaluating Climate Models with Sliced Elastic Distance, arXiv [preprint], arXiv:2307.08685, 2023.a, b Giorgetta, M. A., Jungclaus, J., Reick, C. H., Legutke, S., Bader, J., Böttinger, M., Brovkin, V., Crueger, T., Esch, M., Fieg, K., Glushak, K., Gayler, V., Haak, H., Hollweg, H.-D., Ilyina, T., Kinne, S., Kornblueh, L., Matei, D., Mauritsen, T., Mikolajewicz, U., Mueller, W., Notz, D., Pithan, F., Raddatz, T., Rast, S., Redler, R., Roeckner, E., Schmidt, H., Schnur, R., Segschneider, J., Six, K. D., Stockhause, M., Timmreck, C., Wegner, J., Widmann, H., Wieners, K.-H., Claussen, M., Marotzke, J., and Stevens, B.: Climate and carbon cycle changes from 1850 to 2100 in MPI-ESM simulations for the Coupled Model Intercomparison Project phase 5, J. Adv. Model. Earth Sy., 5, 572–597, https://doi.org/10.1002/jame.20038, 2013.a, b Giorgi, F. and Anyah, R.: The road towards RegCM4, Clim. Res., 52, 3–6, 2012.a Good, P. I.: Permutation, parametric, and bootstrap tests of hypotheses, Springer, New York, https://doi.org/10.1007/b138696, 2006.a Hazeleger, W., Severijns, C., Semmler, T., Ştefănescu, S., Yang, S., Wang, X., Wyser, K., Dutra, E., Baldasano, J. M., Bintanja, R., Bougeault, P., Caballero, R., Ekman, A. M. L., Christensen, J. H., van den Hurk, B., Jimenez, P., Jones, C., Kållberg, P., Koenigk, T., McGrath, R., Miranda, P., van Noije, T., Palmer, T., Parodi, J. A., Schmith, T., Selten, F., Storelvmo, T., Sterl, A., Tapamo, H., Vancoppenolle, M.,Viterbo, P., and Willén, U.: EC-Earth: a seamless earth-system prediction approach in action, B. Am. Meteorol. Soc., 91, 1357–1364, 2010.a Hernández-Díaz, L., Nikiéma, O., Laprise, R., Winger, K., and Dandoy, S.: Effect of empirical correction of sea-surface temperature biases on the CRCM5-simulated climate and projected climate changes over North America, Clim. Dynam., 53, 453–476, 2019.a, b Hersbach, H., Bell, B., Berrisford, P., Hirahara, S., Horányi, A., Muñoz Sabater, J., Nicolas, J., Peubey, C., Radu, R., Schepers, D., Simmons, A., Soci, C., Abdalla, S., Abellan, X., Balsamo, G., Bechtold, P., Biavati, G., Bidlot, J., Bonavita, M., De Chiara, G., Dahlgren, P., Dee, D., Diamantakis, M., Dragani, R., Flemming, J., Forbes, R., Fuentes, M., Geer, A., Haimberger, L., Healy, S., Hogan, R. J., Hólm, E., Janisková, M., Keeley, S., Laloyaux, P., Lopez, P., Lupu, C., Radnoti, G., de Rosnay, P., Rozum, I., Vamborg, F., Villaume, S., and Thépaut, J.-N.: Complete ERA5 from 1940: Fifth generation of ECMWF atmospheric reanalyses of the global climate, Copernicus Climate Change Service (C3S) Data Store (CDS) [data set], https://doi.org/10.24381/cds.143582cf, 2017.a Hersbach, H., Bell, B., Berrisford, P., Hirahara, S., Horányi, A., Muñoz Sabater, J., Nicolas, J., Peubey, C., Radu, R., Schepers, D., Simmons, A., Soci, C., Abdalla, S., Abellan, X., Balsamo, G., Bechtold, P., Biavati, G., Bidlot, J., Bonavita, M., De Chiara, G., Dahlgren, P., Dee, D., Diamantakis, M., Dragani, R., Flemming, J., Forbes, R., Fuentes, M., Geer, A., Haimberger, L., Healy, S., Hogan, R. J., Hólm, E., Janisková, M., Keeley, S., Laloyaux, P., Lopez, P., Lupu, C., Radnoti, G., de Rosnay, P., Rozum, I., Vamborg, F., Villaume, S., and Thépaut, J.-N.: The ERA5 global reanalysis, Q. J. Roy. Meteor. Soc., 146, 1999–2049, https://doi.org/10.1002/qj.3803, 2020a.a, b Hersbach, H., Bell, B., Berrisford, P., Hirahara, S., Horányi, A., Muñoz Sabater, J., Nicolas, J., Peubey, C., Radu, R., Schepers, D., Simmons, A., Soci, C., Abdalla, S., Abellan, X., Balsamo, G., Bechtold, P., Biavati, G., Bidlot, J., Bonavita, M., De Chiara, G., Dahlgren, P., Dee, D., Diamantakis, M., Dragani, R., Flemming, J., Forbes, R., Fuentes, M., Geer, A., Haimberger, L., Healy, S., Hogan, R. J., Hólm, E., Janisková, M., Keeley, S., Laloyaux, P., Lopez, P., Lupu, C., Radnoti, G., de Rosnay, P., Rozum, I., Vamborg, F., Villaume, S., and Thépaut, J.-N.: ERA5 hourly data on single levels from 1940 to present, Copernicus Climate Change Service (C3S) Climate Data Store (CDS) [data set], https://doi.org/10.24381/cds.adbb2d47, 2020b.a Hersbach, H., Bell, B., Berrisford, P., Biavati, G., Horányi, A., Muñoz Sabater, J., Nicolas, J., Peubey, C., Radu, R., Rozum, I., Schepers, D., Simmons, A., Soci, C., Dee, D., and Thépaut, J.-N.: ERA5 hourly data on single levels from 1940 to present, Copernicus Climate Change Service (C3S) Climate Data Store (CDS) [data set], https://doi.org/10.24381/cds.adbb2d47, 2023.a Holmes, A. P., Blair, R. C., Watson, J. D. G., and Ford, I.: Nonparametric analysis of statistic images from functional mapping experiments, Journal of Cerebral Blood Flow & Metabolism, 16, 7–22, Horváth, L. and Kokoszka, P.: Inference for functional data with applications, vol. 200, Springer Science & Business Media, https://doi.org/10.1007/978-1-4614-3655-3, 2012.a Hurrell, J., Visbeck, M., and Pirani, P.: WCRP Coupled Model Intercomparison Project – Phase 5, CLIVAR Exchanges Newsletter, 15, 2011.a Intergovernmental Panel on Climate Change (IPCC): Evaluation of climate models, in: Climate change 2013: the physical science basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, 741–866, Cambridge University Press, 2014.a Jia, K., Ruan, Y., Yang, Y., and You, Z.: Assessment of CMIP5 GCM Simulation Performance for Temperature Projection in the Tibetan Plateau, Earth Space Sci., 6, 2362–2378, https://doi.org/10.1029/ 2019EA000962, 2019.a Kamworapan, S. and Surussavadee, C.: Evaluation of CMIP5 Global Climate Models for Simulating Climatological Temperature and Precipitation for Southeast Asia, Adv. Meteorol., 2019, 1067365, https:// doi.org/10.1155/2019/1067365, 2019.a Kobayashi, S., Ota, Y., Harada, Y., Ebita, A., Moriya, M., Onoda, H., Onogi, K., Kamahori, H., Kobayashi, C., Endo, H., Miyaoka, K., and Takahashi, K.: The JRA-55 Reanalysis: General Specifications and Basic Characteristics, J. Meteorol. Soc. Jpn. Ser. II, 93, 5–48, https://doi.org/10.2151/jmsj.2015-001, 2015.a Lee, J., Waliser, D., Lee, H., Loikith, P., and Kunkel, K. E.: Evaluation of CMIP5 ability to reproduce twentieth century regional trends in surface air temperature and precipitation over CONUS, Clim. Dynam., 53, 5459–5480, https://doi.org/10.1007/s00382-019-04875-1, 2019.a, b, c, d, e, f, g Martynov, A., Laprise, R., Sushama, L., Winger, K., Šeparović, L., and Dugas, B.: Reanalysis-driven climate simulation over CORDEX North America domain using the Canadian Regional Climate Model, version 5: model performance evaluation, Clim. Dynam., 41, 2973–3005, 2013.a Matchett, J. R., Stark, P. B., Ostoja, S. M., Knapp, R. A., McKenny, H. C., Brooks, M. L., Langford, W. T., Joppa, L. N., and Berlow, E. L.: Detecting the influence of rare stressors on rare species in Yosemite National Park using a novel stratified permutation test, Sci. Rep., 5, 10702, https://doi.org/10.1038/srep10702, 2015.a, b McGinnis, S. and Mearns, L.: Building a climate service for North America based on the NA-CORDEX data archive, Climate Services, 22, 100233, https://doi.org/10.1016/j.cliser.2021.100233, 2021.a Mearns, L. O., McGinnis, S., Korytina, D., Arritt, R., Biner, S., Bukovsky, M., Chang, H.-I., Christensen, O., Herzmann, D., Jiao, Y., Kharin, S., Lazare, M., Nikulin, G., Qian, M., Scinocca, J., Winger, K., Castro, C., Frigon, A., and Gutowski, W.: The NA-CORDEX dataset, version 1.0, NCAR Climate Data Gateway, Boulder CO [data set], https://doi.org/10.5065/D6SJ1JCH, 2017.a, b Menne, M. J., Durre, I., Vose, R. S., Gleason, B. E., and Houston, T. G.: An Overview of the Global Historical Climatology Network-Daily Database, J. Atmos. Ocean. Tech., 29, 897–910, https://doi.org /10.1175/JTECH-D-11-00103.1, 2012.a NA-CORDEX: Missing Data, https://na-cordex.org/missing-data.html (last access: 24 June 2020), 2020.a Nichols, T. E. and Holmes, A. P.: Nonparametric permutation tests for functional neuroimaging: a primer with examples, Human brain mapping, 15, 1–25, 2002.a Oh, S.-G., Kim, B.-G., Cho, Y.-K., and Son, S.-W.: Quantification of The Performance of CMIP6 Models for Dynamic Downscaling in The North Pacific and Northwest Pacific Oceans, Asia-Pac. J. Atmos. Sci., 59, 367–383, 2023.a Phillips, N. A.: The general circulation of the atmosphere: A numerical experiment, Q. J. Roy. Meteor. Soc., 82, 123–164, https://doi.org/10.1002/qj.49708235202, 1956.a Raäisaänen, J.: How reliable are climate models?, Tellus A, 59, 2–29, 2007.a Randall, D. A., Wood, R. A., Bony, S., Colman, R., Fichefet, T., Fyfe, J., Kattsov, V., Pitman, A., Shukla, J., Srinivasan, J., Stouffer, R. J., Sumi, A., and Taylor, K. E.: Climate models and their evaluation, in: Climate change 2007: The physical science basis. Contribution of Working Group I to the Fourth Assessment Report of the IPCC (FAR), 589–662, Cambridge University Press, 2007.a Reiss, P., Huang, L., and Mennes, M.: Fast function-on-scalar regression with penalized basis expansions, Int. J. Biostat., 6, article 28, 2010.a Samuelsson, P., Jones, C. G., Willén, U., Ullerstig, A., Gollvik, S., Hansson, U., Jansson, C., Kjellström, E., Nikulin, G., and Wyser, K.: The Rossby Centre Regional Climate model RCA3: model description and performance, Tellus A, https://doi.org/10.1111/j.1600-0870.2010.00478.x, 2011. a Scinocca, J. F., Kharin, V. V., Jiao, Y., Qian, M. W., Lazare, M., Solheim, L., Flato, G. M., Biner, S., Desgagne, M., and Dugas, B.: Coordinated Global and Regional Climate Modeling, J. Climate, 29, 17–35, https://doi.org/10.1175/JCLI-D-15-0161.1, 2016.a Šeparović, L., Alexandru, A., Laprise, R., Martynov, A., Sushama, L., Winger, K., Tete, K., and Valin, M.: Present climate and climate change over North America as simulated by the fifth-generation Canadian regional climate model, Clim. Dynam., 41, 3167–3201, 2013.a Skamarock, W. C., Klemp, J. B., Dudhia, J., Gill, D. O., Barker, D. M., Duda, M. G., Huang, X.-Y., Wang, W., and Powers, J. G.: A description of the advanced research WRF version 3, NCAR technical note, 475, 113, 2008.a Vissio, G., Lembo, V., Lucarini, V., and Ghil, M.: Evaluating the Performance of Climate Models Based on Wasserstein Distance, Geophys. Res. Lett., 47, e2020GL089385, https://doi.org/10.1029/ 2020GL089385, 2020.a Wilks, D. S.: Resampling Hypothesis Tests for Autocorrelated Fields, J. Climate, 10, 65–82, https://doi.org/10.1175/1520-0442(1997)010<0065:RHTFAF>2.0.CO;2, 1997.a
{"url":"https://ascmo.copernicus.org/articles/10/123/2024/ascmo-10-123-2024.html","timestamp":"2024-11-09T23:36:57Z","content_type":"text/html","content_length":"394269","record_id":"<urn:uuid:c1185ae3-a030-4cdc-84e9-2603c327f245>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00117.warc.gz"}
Deep (∼2000 m) observations near the Sigsbee escarpment in the Gulf of Mexico show short-period (approximately 5-12 days) energetic currents due to topographic Rossby waves (TRW's). We suggest that the phenomenon is due to the focusing and accumulation of TRW energy by the slopes coupled with a bend in isobaths, in a topographic caustic (topocaustic). The idea draws on a simple mathematical equivalence between the propagation of internal waves and of TRW's. Topocaustics occur near regions of maximum N[T] = N|∇h| (N = Brunt-Väisälä frequency; h = water depth). Because of the one-sided propagation property of TRW's, energy also tends to accumulate at the "western" end of closed contours of N[T]. The process is demonstrated here using a nonlinear primitive-equation numerical model with idealized bathymetry and forcing. A Gulf of Mexico simulation initialized with a data-assimilated analysis covering the period of the Sigsbee observation is then conducted. The mooring is near a localized maximum N[T], and Intrinsic Mode Functions confirm the existence of energetic bursts of short-period deep-current events. The strong currents are locally forced from above, either by an extended Loop Current or a warm ring.
{"url":"https://scholars.ncu.edu.tw/zh/publications/topocaustics","timestamp":"2024-11-06T01:05:46Z","content_type":"text/html","content_length":"53502","record_id":"<urn:uuid:f4a3a8c7-5cac-4cda-9616-2af037bfd6b1>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00127.warc.gz"}
Quantum Computers Explained | AtomsTalk Quantum Computers Explained Quantum computers are computers which uses quantum phenomena such as entanglement and superposition to perform computing functions. Where a traditional computer uses bits, which are strings of numbers (think of the binary system, with 0s and 1s), to store information, a quantum computer uses a qubit. One can imagine a qubit like storing 0s and 1s in two distinguishable quantum states. To understand quantum computing, we first need to understand a couple of terms and definitions in quantum Quantum Mechanical Background In physics and chemistry, the term quantum refers to the minimum amount of anything that can participate or is involved in an interaction. We refer to physical quantities being “quantized”. This implies that physical properties can only take discrete values (like steps), and not continuous ones (like an analogue clock). Quantum Model of an Atom (Source) Quantum superposition within quantum mechanics is a fundamental principle that states that, like waves in classical physics, any two quantum states can be added together, or superposed. A quantum state is a mathematical expression that describes the probability distribution for the outcomes of each possible measurement on a system. The result of superposition will be a new valid quantum state, and every quantum state can be expressed as the result of the superposition of two or more distinct quantum states. In the quantum model, entanglement is a physical phenomenon that occurs when a pair or group of particles are generated, interact or share special proximity in such a way that the quantum state of each particle of the pair or of the group cannot be described independently of the quantum state of the other particles, even when the particles are separated by a large distance. Quantum entanglement of two photons (Source) How do Quantum Computers Work? A quantum computer is a model of how to build a computer. The idea is that quantum computers can use certain phenomena from quantum mechanics, such as superposition and entanglement, to perform operations on data. The basic principle behind quantum computation is that quantum properties can be used to represent data and perform operations on it. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. The idea of quantum computing is still very new. Experiments have been done. In these, a very small number of operations were done on qubits (quantum bit). Both practical and theoretical research continues with interest, and many national governments and military funding agencies support quantum computing research to develop quantum computers for both civilian and military purposes, such as cryptanalysis. Today’s computers, called “classical” computers, store information in binary; each bit is either on or off. Quantum computation use qubits, which, in addition to being possibly on or off, can be both on and off, which is a way of describing superposition, until a measurement is made. The state of a piece of data on a normal computer is known with certainty, but quantum computation uses probabilities. Only very simple quantum computers have been built, although larger designs have been invented. If large-scale quantum computers can be built, they will be able to solve some problems much more quickly than any computer that exists today (such as Shor’s algorithm). Quantum computers are different from other computers such as DNA computers and traditional computers based on transistors. Some computing architectures such as optical computers may use classical superposition of electromagnetic waves. Without quantum mechanical resources such as entanglement, people think that an exponential advantage over classical computers is not possible. Quantum computers cannot perform functions that are not theoretically computable by classical computers, in other words, they do not alter the Church-Turing thesis. They would, however, be able to do many things much more quickly and efficiently. Quantum Information Quantum information is the information of the state of a quantum system. It is the basic entity of study in quantum information theory and can be manipulated using quantum information processing techniques. It is an interdisciplinary field that involves quantum mechanics, computer science, information theory, philosophy and cryptography among other fields. Qubits and Quantum Information Unlike classical digital states (which are discrete), a qubit is continuous-valued, describable by a direction on the Bloch sphere. Despite being continuously valued in this way, a qubit is the smallest possible unit of quantum information, and despite the qubit state being continuously-valued, it is impossible to measure the value precisely. Five famous theorems describe the limits on the manipulation of quantum information. 1. No-teleportation theorem, which states that a qubit cannot be (wholly) converted into classical bits; that is, it cannot be “read”. 2. No-cloning theorem, which prevents an arbitrary qubit from being copied. 3. No-deleting theorem, which prevents an arbitrary qubit from being deleted. 4. No-broadcast theorem, although a single qubit can be transported from place to place (e.g., via quantum teleportation), it cannot be delivered to multiple recipients. 5. No-hiding theorem, which demonstrates the conservation of quantum information. These theorems prove that quantum information within the universe is conserved. They open up possibilities in quantum information processing. Representation of a Bloch sphere (Source) So, that was all some insight into quantum computers. Leave a Comment
{"url":"https://atomstalk.com/blogs/quantum-computers-explained/","timestamp":"2024-11-09T07:25:26Z","content_type":"text/html","content_length":"172882","record_id":"<urn:uuid:804c6121-2265-48ed-a6ae-a9e441c2adb8>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00064.warc.gz"}
Lines, Rays, and Line Segments Worksheets Our printable worksheets on points, lines, rays, and line segments are a focused and highly beneficial resource on developing knowledge and understanding of these four fundamental concepts of geometry. Students will be able to differentiate, identify, name, draw rays, lines, and line segments and also answer concept-based questions. Start your practice with our free worksheets! Identifying Points, Lines, Rays, and Line Segments Lay a solid ground for geometry with our pdf worksheets, where grade 4 and grade 5 children acquire a clear idea of points, rays, lines, and line segments. Naming Lines, Rays, and Line Segments Learn to differentiate between a ray, a line, or a line segment and denote them using specific symbols with our printable worksheets that provide all the needful learning and practice. Drawing Lines, Rays, and Line Segments Children in 4th grade and 5th grade revisit concepts and gain considerable practice in connecting the points to draw either a line, a ray, or a line segment by taking a hint from the symbol Deciphering Lines, Rays, and Line Segments Get ready to answer a bunch of questions pertaining to lines, line segments, rays, opposite rays, end points, common point and much more by observing the given figure. Identifying Lines, Rays, and Line Segments Children in grade 4 and grade 5 apply their knowledge of lines, rays, and line segments to identify the figures in these printable worksheets and choose the correct option. Lines, Rays, and Line Segments | Charts Pore over these charts, packed with vivid definitions and symbolic and diagrammatic representations, to introduce kids to points, lines, rays, and line segments.
{"url":"https://www.tutoringhour.com/worksheets/line-ray-line-segment/","timestamp":"2024-11-11T14:25:19Z","content_type":"text/html","content_length":"76325","record_id":"<urn:uuid:baec42e1-cd3b-432d-bb59-938dd217b1ce>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00770.warc.gz"}
Decimal Versus Binary Representation of Numbers in Computers The dichotomy created by the advent of computers and brought up by the title of this column was a major quandary in the early days of the computer revolution, causing major controversy in both the academic and commercial communities involved in the development of modern computer architectures. Even though the controversy was eventually decided (in favor of binary representation; all commercially available computers use a binary internal architecture), echoes of that controversy still affect computer usage today by creating errors when data is transferred between computers, especially in the chemometric world. A close examination of the consequences reveals a previously unexpected error source. So what’s the cause for all the furor? Basically, it’s because the “natural” smallest piece of internal hardware of a computer (a “bit”—contraction of a binary digit) has two states: “on” and “off.” The literature assigns various name pairs to those states: on/off; high/ low; true/false, or one of the various others depending on the application. The name pair of special interest to us here is “1 /0.” Conceptually, the 1/0 dichotomy can be used to implement several different underlying concepts, including Boolean logic states and numbers. Regardless of this interpretation, the state of the circuit is represented by an actual voltage at the output of the circuit. Typically, the state “0” would correspond to an actual zero voltage at the output, whereas a “1” state would represent some non-zero voltage; five volts is the common value. Actual hardware can transiently have intermediate values while changing from one state to another. As an example, when analyzing the function the hardware performs, intermediate voltages are considered disallowed; in actual hardware, intermediate voltages only occur briefly during a change of state. Hardware can be devised to create basic Boolean logic functions such as “AND,” “OR,” and “NOT.” Combinations of those basic functions can be, and are, used to create more complicated logic functions, and they can also perform functions George Boole never conceived of as long as the output signal of the hardware is a voltage that corresponds to either a “zero” or a “one.” So much for interpreting 1/0 as logic. What we want to know is how that same concept is used to represent numbers. It is very straightforward; it’s done the same way that we do with decimal numbers. We call our usual number system “decimal,” which is also called base-10 because it consists of 10 digits: “0,” “1,” “2,” “3,” “4,” “5,” “6,” “7,” “8,” and “9” (10 different symbols representing the 10 digits). In contrast, binary (base-2) numbers are only allowed two states, with symbols representing the two states, the zero and one described above. As a short aside, mathematicians have described number systems with different numbers of states (and corresponding symbols). For example, in a ternary number system, a digit can have three states (“0”, “1”, and “2”), although the issue with this system is that it is not so easily implemented in hardware. That’s one reason you never see it in day-to-day use. Donald Knuth discusses some of the more exotic number systems mathematicians have devised in his masterwork (see section 1.2.2 of [1]), such as number systems with fractional and even irrational bases (“There is a real number, denoted by e = 2.718281828459045..., for which the logarithms have simpler properties” (see page 23 in [1]). That means that e, a number that is not only irrational, but transcendental, can be the base of a number system. But we won’t go there today.). In fact, some of the alternative number systems are sometimes seen in computer applications. In particular, the octal (base-8) and the hexadecimal (base-16) number systems are sometimes used in special computer applications and for pedagogical purposes in instructional texts. Octal- and hexadecimal systems are particularly well-suited to being used for representing numbers in computers by virtue of the fact that they are both powers of two. Thus, they can easily be converted to or from their binary representation. Octal numbers can be expressed simply by taking the bits of a binary number in groups of three, and hexadecimal numbers can be expressed by taking the bits of a binary number in groups of four. Conversion to other bases is more complicated and we leave that for another time. Sometimes, we need to accommodate another limitation of the number systems we use. For example, all number systems have one and the same limitation—the number of symbols that are used to represent numbers is finite and limited. So how do we represent numbers higher than those that can be represented with a single symbol? For example, the binary system only allows for two states, so how can we count to values higher than 1? Again, the answer is in a similar fashion to how we do it in the decimal number system. And how is that? The answer is that we use positional information to enable counting beyond the limit of the number of symbols available. When we’ve “used up” all the symbols available to us in one place of the number, we add another place, and in it we put the number of times we’ve “run through” all the available symbols in the next lower place. Regardless of the number system, another limitation is that there are only a finite number of symbols to represent the digits, so how can we keep counting to higher numbers when we’ve “used up” the available symbols? Of all the number systems we’ve mentioned, one of them encounters this problem—the hexadecimal system, which requires 16 different symbols, six more than our usual numbers do. The solution generally applied is to co-opt some of the alphabetic characters as numerical digits; typically the letters a–f are used to represent digits with values of 10 to 15. To see how this all works, Table I shows how the first 20 numbers (plus zero) are represented in each of these various number systems. Online utilities are available to convert from one number base to another (an example is in the literature [2]). It is obvious that there is danger of confusion because the same symbol combination could represent different numbers depending on which number system it is being used in. In cases where such confusion could arise, the number system is indicated with a subscript after the number. Thus, for example, 100[2] = 4[10] (read as: “one-oh-oh in base two equals four in base ten”), whereas 100[3] = 9[10] (read as “one-oh-oh in base three equals nine in base 10,” as we can see from Table I), and the subscripts tell us which is which. How do we then interpret the subscripts? Those are decimal! Numbers in Computers Once they are constructed, computers have no problem dealing with any of those systems of numbers, although some are more complicated for engineers to set up than others. However, people have problems with them. For example, consider one very simple question: “is ef7da1[16] greater or less than 1100101110101000010[2]?” I’m sure that any of our readers could, with enough time and effort, work out the answer to that question, but the point is that it doesn’t come easily; certainly, it’s not as easy as answering the similar question “Is 5[10] greater or less than 13[10]?” We’re used to working with decimal numbers. At the beginning of the computer revolution and even in the times before then when “computers” meant multi-million-dollar devices attended by “high priests” (as they were perceived at the time), the question of how to represent numbers inside a computer arose, and it was an important question. Back then, computers were used to perform calculations involving multi-million-dollar business transactions (at a time when a million dollars was worth something!), and the results needed to be checked to ensure that the computers were working properly and providing correct answers. Only people could do that, and they had to do it using decimal representation. That created a strong incentive to have the computers also use decimal representation so that the computers’ internal numbers could easily and directly be compared with the manual computations. Eventually, the need for people to comprehend what the computers were doing was addressed with a hybrid approach called binary-coded decimal (BCD). Numerical information was maintained internally as decimal values, where each digit 0–9 was expressed individually in binary. This process required only four bits per digit, just like hexadecimal, but only 10 of the 16 possible combinations were allowed to be expressed. This process allowed combinations to be put into a one-to-one correspondence with the ten decimal digits. This scheme made it easier for humans to interpret, but it carried some disadvantages: 1) it increased the complexity of each digit’s circuitry to prevent it from entering a disallowed state, which made computers more expensive to construct and more subject to breakdowns; 2) arithmetic circuits were also more complicated to enable decimal instead of binary operations; and 3) the circuitry for all four bits for each digit had to be present for every decimal digit throughout the computer, even though only 10 of the possible 16 combinations of those four bits were used. As a result, it was a very inefficient use of the hardware. To represent numbers of up to (say) 100,000[10], all components, wiring, controls, memory, and interconnects required 6 x 4 = 24 bits of (more complicated) hardware for BCD numbers compared to only 17 bits for binary numbers. The discrepancy increased as the magnitude of the numbers that needed to be accommodated increased. In the days before large-scale integration techniques were available, that allowed an entire computer to be fabricated on a single silicon chip, which imposed a substantial cost penalty on computer manufacturing because the extra bits had to be included in the control, processing, memory, and every other part of the computer. Some “tricks” were available to reduce this BCD penalty, but the “tricks” often had the side-effect of exchanging higher hardware cost for slower computation speed, and it was still more expensive to build a computer for BCD numbers than for binary numbers. In fact, remnants of BCD number representations still persist in the form of any device that has a built-in numeric display. Typically, those consist of a seven-segment LED (or other electro-optical technology) are arranged so that activating appropriate segments allow any numeral from zero to nine to be displayed. Although often combined into a single circuit, that requires conceptually two stages of decoding to implement. For each digit displayed, four binary bits are decoded to one of the 10 decimal digits, then each of those is decoded to determine which of the LED segments needed to be activated. Where Are We Today? In modern times, computers are based on micro-controllers that often include, as mentioned above, an entire computer on the silicon integrated circuit (the “chip”). Because the “fab” needed to construct such a computer costs several millions of dollars, efficiency in using silicon “real estate” is of paramount importance, so modern computers use binary numbers for their internal operations, whereas input (accepting data), and output (providing results), both of which involve human interaction, are done using decimal. With the increases in computer speed and reliability achieved over the years, the conversions between the two domains is performed via the software, bypassing the older problems previously encountered. The Problem However, we’re not out of the woods yet! We have not yet accounted for all the numbers and types of numbers we expect our computers to deal with. In particular, ordinary scientific measurements need to deal with very small (for example, the size of a proton is 8 x 10^-16 meters) and very large quantities (the universe is (as of this writing) known to be roughly 92 billion (9.2 x 10^10) light years in diameter and 14 billion (1.4 x 10^10) years old; a mole consists of 6 x 10^23 atoms, and so forth). Ordinary numbers were insufficient to deal with the need to represent these very small and very large quantities. As a result, scientific notation was devised to help us express these extreme quantities, as I just did at the beginning of this paragraph. There is a similar problem in computer expression of numbers—how to deal with very large and very small numbers. A separate but related problem is how to represent fractions; note that the examples and number systems described above all deal with integers. However, computers used in the “real world” have to deal with fractional values as well as very large and small ones. The computer community has dealt with that problem by devising a solution analogous to scientific notation. Computers generally recognize two types of numbers, maintained by the software and independent of the number base used by the hardware. These are designated “integers” and “floating point” numbers. All the number systems we’ve discussed above, regardless of the number base, are integers. Although a mathematician might disagree with this definition, for our purposes here we consider “integers” to mean the counting numbers (as in Table I). Integers are generally limited to values from zero to a maximum determined by the number of bits in a computer “word” (which is hardware-dependent) and their negatives, regardless of the number base. Table I tells us (almost) everything else we need to know right now about integers. The other type of numbers, which are recognized and used internally by computers, is analogous to scientific notation and called “floating point” numbers. Floating point numbers come in a variety of implementations, which are designated Float-16, Float-32, Float-64, and so forth, depending on how many bits of memory are allocated by the computer software to each “floating point” number. Generally, that is a small multiple of the size of a computer “word,” which is the number of bits, determined by the hardware, that the computer can handle simultaneously (and not incidentally, usually also determines the maximum size of an integer). Generally, each number is divided into four parts: an exponent; a mantissa; sign of the exponent; and sign of the mantissa. Because different manufacturers could split up the parts of a floating point number in a variety of ways, this scheme could—and did—lead to chaos and incompatibility problems in the industry, until the Institute of Electrical and Electronic Engineers (IEEE), a standards-setting organization for engineering disciplines, stepped in (3). Comparable to the American Society for Testing and Materials (ASTM) (4,5), IEEE created a standard (see IEEE-754) (6) for the formats of numbers used inside computers. The standard defines two types of floating point numbers, single precision and double precision. Each is defined by splitting a number into two main parts, an exponent and a mantissa. Per the standard, the 32 bits of a single-precision number allocate 23 bits for the mantissa of the number, which gives a precision equivalent to 6–9 decimal digits depending on the magnitude of the number. Double precision uses 64 bits to represent a number and actually provides more than double that of single precision, the 52-bit mantissa of an IEEE double-precision number is equivalent to a precision of approximately 16 decimal digits. However, we’re still not out of the woods. There’s no flexibility in the binary representation of data in the computer, especially if they are, in fact, IEEE single-precision numbers, but a problem arises when someone is careless when changing the representation to decimal for external use, such as transferring the data to another computer, displaying it for people to read, or performing computations on it after conversion to decimal. It doesn’t matter what the data is; it’s a fundamental problem of number representations. Converting the internal values in your computer to a format with an insufficient number of decimal digits is the underlying source of a problem, and that will affect the results of any further calculation performed on that data. The IEEE specifications include the following proviso: if a decimal string with at most six significant digits is converted to IEEE 754 single-precision representation and then converted back to a decimal string with the same number of digits, the final result should match the original string. Similarly, if an IEEE 754 single-precision number is converted to a decimal string with at least nine significant digits, and then converted back to single-precision representation, the final result must match the original number (3). So when you write out your data to a file and the data is written as decimal numbers, the data stored in the file will have an insufficient number of decimal digits of precision, and contain neither the six-and-a-half digits worth that the internal binary representation of the data contains nor the nine that is recommended. When that data is read into another computer, the binary number generated in the second computer is not the same as the original binary number that gave rise to it. Performing computations, such as a derivative or other processing, does not fix the problem, because the new data will also be subject to the same limitations. Thus, transferring data to a different computer so that a different program can work on it may not give the same results as performing the same computations on the original data. This discussion is pertinent to some data-transfer standards. For example, JCAMP-DX (7) is a popular format for computer exchange of spectral data. The JCAMP-DX standard does not address the question of the precision of the data being handled; it permits data to be stored using an arbitrary number of digits to represent the original data. Therefore, an unwary user of JCAMP-DX may use an insufficient number of decimal digits to store spectral data in a JCAMP-DX file and later be surprised by the results obtained after the data has been transferred. For example, if the data is to be used for chemometric calibration purposes (for example, multiple linear regression [MLR] or principal component regression [PCR]), different results will be obtained from data after it has been transferred to a second computer than if it was obtained from the original data in the original computer. How can this be fixed? As per the discussion above, any external decimal representation of the data must be formatted as a decimal string of at least nine decimal digits. Currently, examples of JCAMP-DX files I’ve seen sometimes use as few as five decimal digits. I have seen JCAMP-DX files written by current software packages that wrote the spectral data using as few as six or seven decimal digits, which is better but still short of the nine required. I am not completely convinced that nine digits is sufficient, although at least it errs on the side of safety. The reasons for my doubt are explained in the appendix below. Therefore, rewriting the code you use to export the data, so that it writes the data out with nine significant digits, is a minimal requirement on what needs to be done. Any other programs that read in that data file must also accept and properly convert those digits. The below is a demonstration of the need for sufficient digits in the decimal representation of numbers converted from internal binary to decimal. As in any number system, numbers are represented by the sum of various powers of the base of the number system. In a binary system as used in computers, the base is 2. Integers (counting numbers ≥1) are represented as the sum of powers of 2 with positive exponents: N = A[0]2^0 + A[1]2^1 + A[2]2^2 + A[3]2^3 + ...In the binary system, the multiplier, A[i], can only have values of 0 or 1; the corresponding power of two is then either in or not in the representation of the number. Floating point numbers (for example, IEEE-754 single precision) are represented by binary fractions; that is, numbers less than 1 that consist of the sum of powers of 2 with negative exponents (for example, A[1]2^-1 + A[2]2^-2 + A[3]2^-3 + ...), where again, the various A[i] can only have values of 0 or 1. When we convert this representation to decimal, each binary digit contributes to the final result an amount equal to its individual value. Table II presents a list of the exact decimal values corresponding to each binary bit. In Table II, these decimal values are an exact representation of the corresponding binary bit. If a conversion of a number from binary to decimal does not include sufficient digits in the representation, then it does not accurately represent the binary number. When the number is converted back to binary in the receiving computer, that number will not be the same as the corresponding number in the original. We discuss this point further Numbers greater than 1 are represented by multiplying the mantissa (from Table II) by an appropriate power of two, which is available as the “exponent” of the IEEE-754 representation of the number. You should be careful when decoding the exponent because there are a couple of “gotchas” that you might run into; those are explained in the official standard (available as a PDF file online), in discussions of the standard, and in the literature (8) that provide details for creating and decoding the exponent, as well as the binary number comprising the mantissa. There are several takeaways from Table II: first, we see that the number of decimal digits needed to exactly express the value of the number (including the leading zeroes) equals the power to which 2 is raised. We also see a repeating pattern: beyond 2^-3, the last three digits of the decimal number repeat the sequence ...125, ...625, ...125.... From Table II, we also see that the statement “single precision corresponds to (roughly) 6.5 digits” (as derived from the IEEE-754 standard [3]) means that by virtue of the leading string of zeros in the decimal-conversion value, any bits beyond the 23rd bit will not affect the value of the decimal number to which binary representation is converted. Table II demonstrates that property because all of the first seven digits of the decimal equivalent are zero at and beyond the 24th bit. Therefore, when added into the 6-decimal-digit equivalent of the binary representation, the sum would not be affected. At any stage, the exact decimal representation of a binary number requires exactly the same number of decimal digits to represent the corresponding binary fraction as the exponent of that binary digit. One point of this exercise was to demonstrate that because these representations of the binary digits are what must be added to give an exact representation of the original binary number, it therefore requires as many decimal digits to represent the binary number as there are bits in that binary number. Errors in Representation Above, we alluded to the fact that floating-point numbers are inexact, and that the degree to which they are inexact depends on the number base. For example, the decimal number 0.3[10] cannot easily be represented in the binary system. 2^-1[10] = 0.5[10] is already too large and 2^-2[10] = 0.25 is too small. We can add smaller increments to approach 0.3 more closely. For example, 0.011[2] = 2^-2 [10] + 2^-3[10] = 0.25[10] + 0.125[10] = 0.375[10], which is again too large. And 0.0101[2] = 0.3125[10] is still too large, while 0.01001[2] = 0.28125[10] is again too small. We’re closing in on 0.3 [10], but clearly it’s not simple. Using an online decimal-to-binary converter (2) reveals that 0.01001100110011...[2] is the unending binary approximation to 0.3[10] but any finite-length binary approximation is inevitably still in error. Different number bases are not always easily compatible. This issue becomes particularly acute when converting numbers from the binary base inside a computer to a decimal base for transferring data to another computer, and then back again. Ideally, the binary value received and stored by the recipient computer should be identical to the binary value sent by the initial computer. For some data transfer protocols, for example, JCAMP-DX allows for varying precision of the decimal numbers used by the transfer. In Table III, we present the results of a computer experiment that emulates this process while examining the effect of truncating the decimal number used in the transfer (perhaps because of insufficient numbers of decimal digits being specified as the format). We picked a single number to represent the numbers that might be transferred and expressed it in decimal and hexadecimal numbers; the hex number represents the internal number of the transmitting computer. The decimal number was successively truncated to emulate the effect of different precision during the transfer and the truncated value used, to learn what the receiving computer receives (the value recovered). The test number we used for this experiment was pi—a nice round number! We used Matlab to convert decimal digits to hexadecimal representation. Unfortunately, Matlab has no provision to allow entry of hexadecimal numbers, so an on-line hex-to-decimal software program was used to implement the hex-to-decimal conversions. The results in Table III are enlightening. Most users of JCAMP software packages (and probably other spectrum-handling software as well) do not provide enough decimal digits in the representation of the spectral data that is transferred to ensure that spectra transferred from one to computer to another are identical on both computers. Therefore, any further computations the computers execute will give different answers because of the use of different data values, which is particularly pernicious in the case of “derivatives,” where the computation inherently provides the results of small differences between large values. This pushes the (normally negligible) errors of the computation into the more significant figures of the results. (1) D.E. Knuth, The Art of Computer Programming (Addison-Wesley, Menlo Park, CA, 1981). (2) D. Wolff, Base Convert: The Simple Floating Base Converter (accessed September 2022). https://baseconvert.com/. (3) IEEE, IEEE Society (accessed September 2022). https://www.IEEE.org. (4) H. Mark, NIR News 20(5), 14–15 (2009). (5) H. Mark, NIR News 20(7), 22–23 (2009). (6) Wikipedia, IEEE 754 (accessed September 2022). https://en.wikipedia.org>wiki>ieee_754. (7) R. MacDonald and P. Wilks, Appl. Spectrosc. 42(1), 151 (1988). (8) IEEE Computer Society, IEEE Standard for Floating-Point Arithmetic, IEEE Std 754TM-2008 (IEEE, New York, NY, 2008). https://irem.univreunion.fr/IMG/pdf/ieee-754-2008.pdf
{"url":"https://www.spectroscopyonline.com/view/decimal-versus-binary-representation-of-numbers-in-computers","timestamp":"2024-11-13T14:36:29Z","content_type":"text/html","content_length":"441217","record_id":"<urn:uuid:4d5b0edf-ae79-45bc-9845-b53e6f855b82>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00208.warc.gz"}
Lesson 4 Equations and Their Solutions 4.1: What is a Solution? (5 minutes) This warm-up prompts students to recall what they know about the solution to an equation in one variable. Students interpret a given equation in the context of a situation, explain why certain values are not solutions to the equation, and then find the value that is the solution. Student Facing A granola bite contains 27 calories. Most of the calories come from \(c\) grams of carbohydrates. The rest come from other ingredients. One gram of carbohydrate contains 4 calories. The equation \(4c + 5= 27\) represents the relationship between these quantities. 1. What could the 5 represent in this situation? 2. Priya said that neither 8 nor 3 could be the solution to the equation. Explain why she is correct. 3. Find the solution to the equation. Activity Synthesis Focus the discussion on how students knew that 8 and 3 are not solutions to the equation and how they found the solution. Highlight strategies that are based on reasoning about what values make the equation true. Ask students: "In general, what does a solution to an equation mean?" Make sure students recall that the solution to an equation in one variable is a value for the variable that makes the equation a true statement. 4.2: Weekend Earnings (15 minutes) In this activity, students write equations in one variable to represent the constraints in a situation. They then reason about the solutions and interpret the solutions in context. To solve the equation, some students may try different values of \(h\) until they find one that gives a true equation. Others may perform the same operations to each side of the equation to isolate \ (h\). Identify students who use different strategies and ask them to share later. Arrange students in groups of 2 and provide access to calculators. Give students a few minutes of quiet work time, and then time to discuss their responses. Ask them to share with their partner their explanations for why 4 and 7 are or are not solutions. If students are unsure how to interpret “take-home earnings,” clarify that it means the amount Jada takes home after paying job-related expenses (in this case, the bus fare). Speaking, Reading: MLR5 Co-Craft Questions. Use this routine to help students interpret the language of writing equations, and to increase awareness of language used to talk about representing situations with equations. Display only the task statement that describes the context, without revealing the questions that follow. Invite students to discuss possible mathematical questions that could be asked about the situation. Listen for and amplify any questions involving equations that connect the quantities in this situation. Design Principle(s): Maximize meta-awareness; Support sense-making Action and Expression: Internalize Executive Functions. Chunk this task into more manageable parts for students who benefit from support with organizational skills in problem solving. Check in with students after the first 2–3 minutes of work time. Invite 1–2 students to share how they determined an equation that represents Jada’s take-home earnings. Record their thinking on a display and keep the work visible as students continue to work. Supports accessibility for: Organization; Attention Student Facing Jada has time on the weekends to earn some money. A local bookstore is looking for someone to help sort books and will pay $12.20 an hour. To get to and from the bookstore on a work day, however, Jada would have to spend $7.15 on bus fare. 1. Write an equation that represents Jada’s take-home earnings in dollars, \(E\), if she works at the bookstore for \(h\) hours in one day. 2. One day, Jada takes home $90.45 after working \(h\) hours and after paying the bus fare. Write an equation to represent this situation. 3. Is 4 a solution to the last equation you wrote? What about 7? □ If so, be prepared to explain how you know one or both of them are solutions. □ If not, be prepared to explain why they are not solutions. Then, find the solution. 4. In this situation, what does the solution to the equation tell us? Student Facing Are you ready for more? Jada has a second option to earn money—she could help some neighbors with errands and computer work for $11 an hour. After reconsidering her schedule, Jada realizes that she has about 9 hours available to work one day of the weekend. Which option should she choose—sorting books at the bookstore or helping her neighbors? Explain your reasoning. Anticipated Misconceptions If students struggle to write equations in the first question, ask them how they might find out Jada's earnings if she works 1 hour, 2 hours, 5 hours, and so on. Then, ask them to generalize the computation process for \(h\) hours. Activity Synthesis Ask a student to share the equation that represents Jada earning $90.45. Make sure students understand why \(90.45 = 12.20h - 7.15\) describes that constraint. Next, invite students to share how they knew if 4 and 7 are or are not solutions to the equation. Highlight that substituting those values into the equation and evaluating them lead to false Then, select students using different strategies to share how they found the solution. Some students might notice that the solution must be greater than 7 (because when \(h=7\), the expression \ (12.20h-7.15\) has a value less than 90.45) and start by checking if \(h=8\) is a solution. If no students mention this, ask them about it. Make sure students understand what the solution means in context. Emphasize that 8 is the number of hours that meet all the constraints in the situation. Jada gets paid $12.20 an hour, pays $7.15 in bus fare, and takes home $90.45. For all of these to be true, she must have worked 8 hours. 4.3: Calories from Protein and Fat (15 minutes) In the previous activity, students recalled what it means for a number to be a solution to an equation in one variable. In this activity, they review the meaning of a solution to an equation in two Give students continued access to calculators. Student Facing One gram of protein contains 4 calories. One gram of fat contains 9 calories. A snack has 60 calories from \(p\) grams of protein and \(f\) grams of fat. The equation \(4p+9f = 60\) represents the relationship between these quantities. 1. Determine if each pair of values could be the number of grams of protein and fat in the snack. Be prepared to explain your reasoning. 1. 5 grams of protein and 2 grams of fat 2. 10.5 grams of protein and 2 grams of fat 3. 8 grams of protein and 4 grams of fat 2. If there are 6 grams of fat in the snack, how many grams of protein are there? Show your reasoning. 3. In this situation, what does a solution to the equation \(4p+9f = 60\) tell us? Give an example of a solution. Activity Synthesis The goal of the discussion is to make sure students understand that a solution to an equation in two variables is any pair of values that, when substituted into the equation and evaluated, make the equation true. Discuss questions such as: • “In this situation, what does it mean when we say that \(x=12\) and \(y=1.5\) are not solutions to the equation?” (They are not a combination of protein and fat that would produce 60 calories. Substituting them for the variables in the equation leads to a false equation of \(61.5=60\).) • “How did you find out the grams of protein in the snack given that there are 6 grams of fat?” (Substitute 6 for \(y\) and solve the equation.) • “Can you find another combination that is a solution?” • “How many possible combinations of grams of protein and fat (or \(x\) and \(y\)) would add up to 60 calories?” (Many solutions) As a segue to the next lesson, solicit some ideas on how we know that there are many solutions to the equation. If no one mentions using a graph, bring it up and tell students that they will explore the graphs of two-variable equations next. Lesson Synthesis To summarize the lesson, refer back to the activity about protein and fat. Remind students that a gram of protein has 4 calories and a gram of fat has 9 calories. Discuss questions such as: • "What does the equation \(4x + 9y=110\) tell us about the calories in a snack?" (It has 110 calories from some grams of protein and some grams of fat.) • "In this situation, what does it mean to solve the equation?" (To find the combination of grams of protein and fat that produce 110 calories.) • "Is the combination of 11 grams of protein and 5 grams of fat a solution to the equation? Why or why not?" (No, they don't add up to 110 calories. Substituting 11 for \(x\) and 5 for \(y\) into the equation doesn't lead to a true equation.) • "Consider the equation \(4(5) + 9y=110\). What does it tell us about the snack?" (The snack has 5 grams of protein and a total of 110 calories.) • "What does it mean to solve this equation?" (To find the grams of fat that, when combined with 5 grams of calories, give a total of 110 calories. To find the value of \(y\) that would make the equation true.) 4.4: Cool-down - Box of T-shirts (5 minutes) Student Facing An equation that contains only one unknown quantity or one quantity that can vary is called an equation in one variable. For example, the equation \(2\ell + 2w = 72\) represents the relationship between the length, \(\ell\), and the width, \(w\), of a rectangle that has a perimeter of 72 units. If we know that the length is 15 units, we can rewrite the equation as: \(2(15) + 2w = 72\). This is an equation in one variable, because \(w\) is the only quantity that we don't know. To solve this equation means to find a value of \(w\) that makes the equation true. In this case, 21 is the solution because substituting 21 for \(w\) in the equation results in a true statement. \(\begin {align}2(15) + 2w &=72\\ 2(15)+2(21) &= 72\\ 30 + 42 &=72\\ 72&=72 \end{align}\) An equation that contains two unknown quantities or two quantities that vary is called an equation in two variables. A solution to such an equation is a pair of numbers that makes the equation true. Suppose Tyler spends \$45 on T-shirts and socks. A T-shirt costs \$10 and a pair of socks costs \$2.50. If \(t\) represents the number of T-shirts and \(p\) represents the number of pairs of socks that Tyler buys, we can can represent this situation with the equation: \(10t + 2.50p = 45\) This is an equation in two variables. More than one pair of values for \(t\) and \(p\) make the equation true. \(t=3\) and \(p=6\) \(\begin {align} 10(3) + 2.50(6) &= 45\\ 30 + 15 &=45\\ 45&=45 \end{align}\) \(t=4\) and \(p=2\) \(\begin {align} 10(4) + 2.50(2) &= 45\\ 40 + 5 &=45\\ 45&=45 \end{align}\) \(t=2\) and \(p=10\) \(\begin {align} 10(2) + 2.50(10) &= 45\\ 20 + 25 &=45\\ 45&=45 \end{align}\) In this situation, one constraint is that the combined cost of shirts and socks must equal \$45. Solutions to the equation are pairs of \(t\) and \(p\) values that satisfy this constraint. Combinations such as \(t=1\) and \(p = 10\) or \(t=2\) and \(p=7\) are not solutions because they don’t meet the constraint. When these pairs of values are substituted into the equation, they result in statements that are false.
{"url":"https://im-beta.kendallhunt.com/HS/teachers/1/2/4/index.html","timestamp":"2024-11-04T10:31:58Z","content_type":"text/html","content_length":"93596","record_id":"<urn:uuid:72826789-645d-4c88-be10-59d17b707037>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00644.warc.gz"}
Space-time model hospital admissions from Initial model: Total hospital admissions We start the analysis with a model for the total number of hospital admissions over time in Brazil, i.e., a temporal model that does not consider the hospital admissions on each state. From the figure above we can see that there is a consistent trend of (log) linear decay of the rate of hospital admissions over time, until April of 2020, when there is an abrupt reduction of hospital admission due to the pandemic of COVID-19 (Ribeiro et al., 2022), which is then followed by what seems to be a return to the previous level, although more observations would be necessary to be sure. We can also note that the data has a clear seasonal pattern, with a period of \(12\) months. We begin with a very simply model. Let \(Y_t\) be the total number of hospital admissions on Brazil at time \(t\). Assume that: \[ \begin{aligned} Y_t|\eta_t &\sim Poisson(\eta_t)\\ \ln{\eta_t}&=\lambda_t=\theta_{1,t}\\ \theta_{1,t}&=\theta_{1,t-1}+\theta_{2,t-1}+\omega_{1,t}\\ \theta_{2,t}&=\theta_{2,t-1}+\omega_{2,t}.\\ \ end{aligned} \] First we define the model structure: Then we define the outcome: Then we fit the model: Finally, we can see how our model performed with the and methods: Fitted DGLM with 1 outcomes. Series.1: Poisson No static coeficients. See the coef.fitted_dlm for the coeficients with temporal dynamic. One-step-ahead prediction Log-likelihood : -24560.92 Interval Score : 52492.80142 Mean Abs. Scaled Error: 1.41602 Clearly the model described above is too simple to describe the data. In particular, it does not take into account any form of seasonal pattern. Let us proceed then by assuming the following model: \[ \begin{aligned} Y_t|\eta_t &\sim Poisson(\eta_t)\\ \ln{\eta_t}&=\lambda_t=\theta_{1,t}+\theta_{3,t}\\ \theta_{1,t}&=\theta_{1,t-1}+\theta_{2,t-1}+\omega_{1,t}\\ \theta_{2,t}&=\theta_{2,t-1}+\ omega_{2,t},\\ \begin{bmatrix}\theta_{3,t}\\\theta_{4,t}\end{bmatrix}&=R\begin{bmatrix}\theta_{3,t}\\\theta_{4,t}\end{bmatrix}+\begin{bmatrix}\omega_{3,t}\\\omega_{4,t}\end{bmatrix}\\ R&=\begin {bmatrix} \cos(\frac{2\pi}{12}) &\sin(\frac{2\pi}{12})\\ -\sin(\frac{2\pi}{12}) & \cos(\frac{2\pi}{12})\end{bmatrix} \end{aligned} \] Where \(R\) is a rotation matrix with angle \(\frac{2\pi}{12}\), such that \(R^{12}\) is equal to the identity matrix. To define the structure of that model we can use the harmonic_block function alongside the polynomial_block function: structure <- polynomial_block( rate = 1, order = 2, D = c(0.95, 0.975), name = "Trend" ) + rate = 1, period = 12, D = 0.98, name = "Season" Then we fit the model (using the previously defined outcome): Fitted DGLM with 1 outcomes. Series.1: Poisson No static coeficients. See the coef.fitted_dlm for the coeficients with temporal dynamic. One-step-ahead prediction Log-likelihood : -17339.1 Interval Score : 41481.46809 Mean Abs. Scaled Error: 0.70882 Notice that this change significantly improves all metrics provided by the model summary, which indicates that we are going in the right direction. We encourage the reader to test different orders for the harmonic block. The previous model could capture the mean behavior of the series reasonably well. However, two deficiencies of that model standout: first, the overconfidence in the predictions, evidenced by the particularly thin credibility interval; nnd second, the difficulty the model had to adapt to the pandemic period. The first problem comes from the fact that we are using a Poisson model, which implies that \(Var[Y_t|\eta_t]=\mathbb{E}[Y_t|\eta_t]\), which means that \(Var[Y_t]=\mathbb{E}[Var[Y_t|\eta_t]]+Var[\ mathbb{E}[Y_t|\eta_t]]=\mathbb{E}[\eta_t]+Var[\eta_t]\). For latter observations we expect \(Var[\eta_t]\) to be relatively small; as such, the variance of \(Y_t\) should be very close to its mean after a reasonable amount of observations. In this scenario, the coefficient of variation, defined as \(\frac{\sqrt{Var[Y_t]}}{\mathbb{E}[Y_t]}\) goes to \(0\) as \(\mathbb{E}[Y_t]\) grows, in particular, for data in the scale we are working with in this particular problem, we would expect a very low coefficient of variation if the Poisson model were adequate, but that is not what we observe. This phenomena is called and is a well known problem in literature . To solve it, we can include a block representing a white noise that is added to the linear predictor at each time, but does not affect previous or future observation, so as to capture the overdipersion. In this case, we will assume the following model: \[ \begin{aligned} Y_t|\eta_t &\sim Poisson(\eta_t)\\ \ln{\eta_t}&=\lambda_t=\theta_{1,t}+\theta_{3,t}+\epsilon_t\\ \theta_{1,t}&=\theta_{1,t-1}+\theta_{2,t-1}+\omega_{1,t}\\ \theta_{2,t}&=\theta_ {2,t-1}+\omega_{2,t},\\ \begin{bmatrix}\theta_{3,t}\\\theta_{4,t}\end{bmatrix}&=R\begin{bmatrix}\theta_{3,t}\\\theta_{4,t}\end{bmatrix}+\begin{bmatrix}\omega_{3,t}\\\omega_{4,t}\end{bmatrix}\\ \ epsilon_t & \sim \mathcal{N}(0,\sigma_t^2) \end{aligned} \] This structure can be defined using the function, alongside the previously used functions: structure <- polynomial_block( rate = 1, order = 2, D = c(0.95, 0.975), name = "Trend" ) + rate = 1, period = 12, D = 0.98, name = "Season" ) + noise_block(rate = 1, name = "Noise") For the second problem, that of slow adaptation after the start of the pandemic. The ideal approach would be to make an intervention, increasing the uncertainty about the latent states at the beginning of the pandemic period and allowing our model to quickly adapt to the new scenario (see West and Harrison, 1997, Chapter 11). We recommend this approach when we already expect a change of behavior in a certain time, even before looking at the data (which is exactly the case). Still, for didactic purposes, we will first present how the automated monitoring can also be used to solve this same problem. In general, we recommend the automated monitoring approach when we do not known if or when a change of behavior happened before looking at the data, i.e., we do not known of any particular event that we expect to impact our outcome. Following what was presented in the Subsection Intervention and monitoring, we can use the following code to fit our model: structure <- polynomial_block( rate = 1, order = 2, D = c(0.95, 0.975), name = "Trend", monitoring = c(TRUE, TRUE) ) + rate = 1, period = 12, D = 0.98, name = "Season" ) + noise_block(rate = 1, name = "Noise") Notice that we set the monitoring of the polynomial_block to c(TRUE,TRUE). By default, the polynomial_block function only activates the monitoring of its first component (the level), but, by the visual analysis made at the beginning, it is clear that the pandemic affected both the level and the slope of the average number of hospital admissions, as such, we would like to monitor both # To activate the automated monitoring it is enough to set the p.monit argument to a valid value fitted.model <- fit_model(structure, outcome, p.monit = 0.05) Fitted DGLM with 1 outcomes. Series.1: Poisson No static coeficients. See the coef.fitted_dlm for the coeficients with temporal dynamic. One-step-ahead prediction Log-likelihood : -1283.634 Interval Score : 14965.54610 Mean Abs. Scaled Error: 0.71239 The summary presented above shows a massive improvement in the comparison metrics with the new changes introduced. Moreover, we can see that the automated monitoring detected the exact moment where the series \(Y_t\) changed behavior, which allowed the model to immediately adapt to the pandemic period. One aspect of the model that may bother the reader is the exceedingly high uncertainty at the first observations. This behavior is duo to our approach to the estimation of the variance of the white noise introduced by the noise_block function (see dos Santos et al., 2024 and the associated documentation for details), which can be a bit too sensitive to bad prior specification at the initial steps. As such, we highly recommend the user to perform a sensitivity analysis to choose the initial variance of the white noise: structure <- polynomial_block( rate = 1, order = 2, D = c(0.95, 0.975), name = "Trend" ) + rate = 1, period = 12, D = 0.98, name = "Season" ) + noise_block(rate = 1, R1 = "H", name = "Noise") # Setting the initial variance as a unknown parameter structure <- structure |> intervention(time = 124, var.index = c(1:2), D = 0.005) search.model <- fit_model( structure, outcome, lag = -1, # Using the model likelihood (f(y|M)) as the comparisson metric. H = seq.int(0, 0.1, l = 101) fitted.model <- search.model$model Notice that, this time around, we chose to make an intervention at the beginning of the pandemic, instead of an automated approach. As mentioned before, this approach is preferable in this scenario, since we were aware that the pandemic would affect our outcome before even looking at the data. Fitted DGLM with 1 outcomes. Series.1: Poisson No static coeficients. See the coef.fitted_dlm for the coeficients with temporal dynamic. One-step-ahead prediction Log-likelihood : -1220.102 Interval Score : 7470.8794 Mean Abs. Scaled Error: 0.5993 Again, the new changes improve the comparison metrics even further, leading to the conclusion that our last model is the best among those presented until now. We highly encourage the reader to run this example and experiment with some of the options the kDGLM package offers, but that were not explored, such as changing the discount factors used in each block, the order of the blocks, adding/ removing structural components, etc.. As a last side note, the user may not like the approach of choosing a specific value for the initial variance of the white noise introduced by the noise_block. Indeed, one may wish to define a prior distribution for this parameter and estimate it along with the others. While we will not detail this approach for the sake of brevity (since it is not directly supported), we would like to point out that we do offer tools to facilitate this procedure: search.result <- search.model$search.data[order(search.model$search.data$H), ] H.vals <- search.result$H log.prior <- dgamma(H.vals, 1, 1, log = TRUE) log.like <- search.result$log.like l.fx <- log.prior + log.like pre.fx <- exp(l.fx - max(l.fx)) fx <- pre.fx / sum(pre.fx * (H.vals[2] - H.vals[1])) plot(H.vals, fx, type = "l", xlab = "H", ylab = "Density", main = "Posterior density for the unknown hyperparameter H" Advanced model: Hospital admissions by state For this model, we need the geographic information about Brazil, as such, we will use some auxiliary packages, namely geobr, tidyverse, sf and spdep, although the kDGLM package does not depend on br.base <- read_state( year = 2019, showProgress = FALSE plot.data <- br.base |> gastroBR |> filter(format(Date, "%Y") == "2019") |> select(UF, Population, Admissions) |> group_by(UF) |> Population = max(Population), Admissions = sum(Admissions) ) |> rename(abbrev_state = UF), by = "abbrev_state" (ggplot() + geom_sf(data = plot.data, aes(fill = log10(Admissions / Population))) + scale_fill_distiller(expression(log[10] * "(admissions/population)"), limits = c(-4, -2.5), palette = "RdYlBu", labels = ~ round(., 2) ) + theme_void() + theme(legend.position = "bottom")) Now we proceed to fitting the model itself. Let \(Y_{it}\) be the number of hospital admissions by gastroenteritis at time \(t\) on region \(i\), we will assume the following model: \[ \begin{aligned} Y_{it}|\eta_{it} &\sim Poisson(\eta_{it})\\ \ln\{\eta_{it}\}&= \lambda_{it}=\theta_{1,t}+u_{i,t}+S_{i,t}+\epsilon_{i,t},\\ \theta_{1,t}&= \theta_{1,t-1}+\theta_{2,t-1}+\omega_ {1,t},\\ \theta_{2,t}&= \theta_{2,t-1}+\omega_{2,t},\\ \begin{bmatrix}u_{i,t}\\ v_{i,t}\end{bmatrix} &= R \begin{bmatrix}u_{i,t-1}\\ v_{i,t-1}\end{bmatrix} + \begin{bmatrix} \omega^{u}_{i,t}\\ \omega ^{u}_{i,t}\end{bmatrix},\\ \epsilon_t & \sim \mathcal{N}(0,\sigma_t^2),\\ S_{1,1},...,S_{r,1} & \sim CAR(\tau), \end{aligned} \] where \(r=27\) is the number of areas within our dataset. Currently, the kDGLM package does not offer support for sequential estimation of \(\tau\), the parameter associated with the CAR prior. A study is being developed to address this limitation. For now, we opt to conduct a sensitivity analysis to determine an optimal value for \(tau\) using the tools presented in Subsection Tools for sensitivity analysis. The optimal value found was \(\tau=0.005\). Alternatively, if real-time inference is not a priority for the analyst, a complete posterior for \(\tau\) can be obtained by adapting the code from Subsection Sampling and hyper parameter estimation , without incurring a high computational cost. Notice that we are assuming a very similar model to that which was used in the initial_model, but here we have a common effect (or a global effect) \(\theta_{1,t}\) that equally affects all regions, and a local effect \(S_{i,t}\) that only affects region \(i\) and evolves smoothly though time. Here we chose a vague CAR prior (Banerjee et al., 2014; Schmidt and Nobre, 2018) for \(S_{i,t}\). The proposed model can be fitted using the following code: adj.matrix <- br.base |> poly2nb() |> nb2mat(style = "B") CAR.structure <- polynomial_block(rate = 1, D = 0.9, name = "CAR") |> block_mult(27) |> block_rename(levels(gastroBR$UF)) |> CAR_prior(scale = "Scale", rho = 1, adj.matrix = adj.matrix) shared.structure <- polynomial_block( RO = 1, AC = 1, AM = 1, RR = 1, PA = 1, AP = 1, TO = 1, MA = 1, PI = 1, CE = 1, RN = 1, PB = 1, PE = 1, AL = 1, SE = 1, BA = 1, MG = 1, ES = 1, RJ = 1, SP = 1, PR = 1, SC = 1, RS = 1, MS = 1, MT = 1, GO = 1, DF = 1, order = 2, D = c(0.95, 0.975), name = "Common" ) |> intervention(time = 124, var.index = c(1:2), D = 0.005) base.structure <- (harmonic_block(rate = 1, order = 1, period = 12, D = 0.98, name = "Season") + noise_block(rate = 1, R1 = 0.007, name = "Noise")) |> block_mult(27) |> inputs <- list(shared.structure, CAR.structure, base.structure) for (uf in levels(gastroBR$UF)) { reg.data <- gastroBR |> filter(UF == uf) inputs[[uf]] <- Poisson(lambda = uf, data = reg.data$Admissions, offset = reg.data$Population) inputs$Scale <- 10**seq.int(-5, 1, l = 21) model.search <- do.call(fit_model, inputs) fitted.model <- model.search$model (plot(fitted.model, outcomes = c("MG", "SP", "ES", "RJ", "CE", "BA", "RS", "SC", "AM", "AC"), lag = 1, plot.pkg = "ggplot2") + scale_color_manual("", values = rep("black", 10)) + scale_fill_manual("", values = rep("black", 10)) + facet_wrap(~Serie, ncol = 2, scale = "free_y") + coord_cartesian(ylim = c(NA, NA)) + guides(color = "none", fill = "none") + theme(legend.position = "top")) Scale for colour is already present. Adding another scale for colour, which will replace the existing scale. Scale for fill is already present. Adding another scale for fill, which will replace the existing scale. Coordinate system already present. Adding new coordinate system, which will replace the existing one. smoothed.values <- coef(fitted.model) plot.data <- data.frame() labels <- list( "2010-01-01" = "(a) January, 2010", "2020-03-01" = "(b) March, 2020", "2020-04-01" = "(c) April, 2020", "2022-12-01" = "(d) December, 2022" for (date in c("2010-01-01", "2020-03-01", "2020-04-01", "2022-12-01")) { index <- min(which(gastroBR$Date == date)) plot.data <- rbind( Date = labels[[date]], Effect = smoothed.values$lambda.mean[order(order(levels(reg.data$UF))), index] / log(10), (ggplot() + geom_sf(data = plot.data, aes(fill = Effect)) + facet_wrap(~Date, strip.position = "bottom") + scale_fill_distiller("$\\log_{10}$ rate", limits = c(-6, -3), palette = "RdYlBu", labels = ~ round(., 2) ) + theme_void() + theme(legend.position = "bottom")) labels <- list( "2015-01-01" = "January, 2015", "2019-12-01" = "December, 2019" plot.data <- data.frame() for (date in c("2015-01-01", "2019-12-01")) { index <- min(which(gastroBR$Date == date)) forecast.vals <- coef(fitted.model, lag = 3, t.eval = index, eval.pred = TRUE) mean.pred <- forecast.vals$data$Prediction reg.data <- gastroBR %>% filter(Date == date) %>% mutate(Tx = log10(Admissions / Population)) plot.data <- rbind( Date = labels[[date]], Effect = log10(mean.pred) - log10(reg.data$Population), Label = "Prediction" plot.data <- rbind( Date = labels[[date]], Effect = reg.data$Tx, br.base, Label = "Observed" plot.data$Label <- factor(plot.data$Label, levels = unique(plot.data$Label)) (ggplot() + geom_sf(data = plot.data, aes(fill = Effect)) + facet_wrap(Label ~ Date, strip.position = "bottom") + scale_fill_distiller("log10 rate", limits = c(-5.5, -3.5), palette = "RdYlBu", labels = ~ round(., 2) ) + theme_void() + theme(legend.position = "bottom")) \(27\) regions, with \(156\) observations each, it is not reasonably to show how our model performed for every combination of date and location. We will limit ourselves to show some regions at all times and all regions at some times The reader may use the code provided in this document or in the vignette to fit this model and do a thoroughly examination of the results. Moreover, here we focus only in the usage of the kDGLM package and not in the epidemiological aspect of the results. Finally, about the computational cost, the initial model (that for the total number of hospital admissions over time) took about \(0.11s\) to fit and the advanced model took \(4.24s\), which is within the expected range, since the final model has \(27\) outcomes and \(110\) latent states that, when we consider that they all had temporal dynamic, yields \(17.160\) parameters, from which the joint distribution is obtained.
{"url":"https://cran.dcc.uchile.cl/web/packages/kDGLM/vignettes/example1.html","timestamp":"2024-11-03T22:44:50Z","content_type":"text/html","content_length":"239395","record_id":"<urn:uuid:fd7fb2ec-b7d4-4694-bb1e-6802ec667865>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00653.warc.gz"}
Dimension of graded algebras over a field Lemma 10.117.1. Let $k$ be a field. Let $S$ be a graded $k$-algebra generated over $k$ by finitely many elements of degree $1$. Assume $S_0 = k$. Let $P(T) \in \mathbf{Q}[T]$ be the polynomial such that $\dim (S_ d) = P(d)$ for all $d \gg 0$. See Proposition 10.58.7. Then 1. The irrelevant ideal $S_{+}$ is a maximal ideal $\mathfrak m$. 2. Any minimal prime of $S$ is a homogeneous ideal and is contained in $S_{+} = \mathfrak m$. 3. We have $\dim (S) = \deg (P) + 1 = \dim _ x\mathop{\mathrm{Spec}}(S)$ (with the convention that $\deg (0) = -1$) where $x$ is the point corresponding to the maximal ideal $S_{+} = \mathfrak m$. 4. The Hilbert function of the local ring $R = S_{\mathfrak m}$ is equal to the Hilbert function of $S$. Comments (5) Comment #6300 by PS on Don't you need some condition for $P$ to exist? Comment #6412 by Johan on The polynomial $P$ exists by the reference given. Comment #6448 by PS on The reference has a condition. Comment #6449 by Johan on Oops, my bad! Comment #6458 by Johan on Thanks and fixed here. Post a comment Your email address will not be published. Required fields are marked. In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). All contributions are licensed under the GNU Free Documentation License. In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 00P5. Beware of the difference between the letter 'O' and the digit '0'. The tag you filled in for the captcha is wrong. You need to write 00P5, in case you are confused.
{"url":"https://stacks.math.columbia.edu/tag/00P5","timestamp":"2024-11-10T08:42:04Z","content_type":"text/html","content_length":"17012","record_id":"<urn:uuid:68154819-cd2c-453b-823b-5d4486a0e591>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00084.warc.gz"}
Adaptive mesh refinement (AMR) algorithms Adaptive mesh refinement (AMR) algorithms¶ The basic adaptive refinment strategy used in AMRClaw Description and Detailed Contents is to refine on logically rectangular patches. A single Level 1 grid covers the entire domain (usually — if it is too large it may be split into multiple Level 1 grids). Some rectangular portions of this grid are covered by Level 2 grids refined by some refinement factor R in each direction (anisotropic refinement is now allowed too — see Specifying AMRClaw run-time parameters in setrun.py). Regions of each Level 2 grid may be covered by Level 3 grids, that are further refined (perhaps with a different refinement ratio). And so on. For the hyperbolic solvers in Clawpack the time step is limited by the Courant number (see Section cfl), and so if the spatial resolution is refined by a factor of R in each direction then the time step will generally have to be reduced by a factor R as well. The AMR code thus proceeds as follows: □ In each time step on the Level 1 grid(s), the values in all grid cells (including those covered by finer grids) are advanced one time step. Before this time step is taken, ghost cells around the boundary of the full computational domain are filled based on the boundary conditions specified in the library routine bcNamr.f (where N is the number of space dimensions). Check the Makefile of an application to see where this file can be found. □ After a step on the Level 1 grid, R time steps must be taken on each Level 2 grid, where R denotes the desired refinement ratio in time from Level 1 to Level 2. For each of these time step, ghost cell values must be filled in around all boundaries of each Level 2 grid. This procedure is defined below in Ghost cells and boundary conditions for AMR. □ After taking R steps on Level 2 grids, values on the Level 1 grid are updated to be consistent with the Level 2 grids. Any cell on Level 1 that is covered by a Level 2 grid has its q value replaced by the average of all the Level 2 grid cells lying within this cell. This gives a cell average that should be a better approximation to the true cell average than the original value. □ The updating just described can lead to a change in the total mass calculated on the Level 1 grid. In order to restore global conservation, it is necessary to do a conservation fix up. (To be This style of AMR is often called Berger-Oliger-Colella adaptive refinement, after the papers of Berger and Oliger [BergerOliger84] and [BergerColella89]. The Fortran code in $CLAW/amrclaw is based on code originally written by Marsha Berger for gas dynamics, and merged in Clawpack in the early days of Clawpack development by MJB and RJL. The algorithms used in AMRClaw are described more fully in [BergerLeVeque98]. Ghost cells and boundary conditions for AMR¶ Consider a Level k > 1 grid for which we need ghost cells all around the boundary at the start of each time step on this level. The same procedure is used at other levels. □ Some Level k grids will be adjacent to other Level k grids and so any ghost cell that is equivalent to a Level k cell on some other grid has values copied from this this grid. □ Some ghost cells will be in the interior of the full computational domain but in regions where there is no adjacent Level k grid. There will be a Level k-1 grid covering that region, however. In this case the ghost cells are obtained by space-time interpolation from values on the Level k-1 grid. □ Some ghost cells will lie outside the full computational domain, where the boundary of the Level k grid lies along the boundary of the full domain. For these cells the subroutine bcNamr (where N is the number of space dimensions) is used to fill ghost cell values with the proper user-specified boundary conditions, unless periodic boundary conditions are specified (see For many standard boundary conditions it is not necessary for the user to do anything beyond setting appropriate parameters in setrun.py (see Specifying classic run-time parameters in setrun.py). Only if user-specified boundary conditions are specified is it necessary to modify the library routine bcNamr.f (after copying to your application directory so as not to damage the library version, and modifying the Makefile to point to the new version). There some differences between the bcNamr.f routine and the bcN.f routine used for the single-grid classic Clawpack routines (which are found in $CLAW/classic/src/Nd/bcN.f). In particular, it is necessary to check whether a ghost cell actually lies outside the full computational domain and only set ghost cell values for those that do. It should be clear how to do this from the library version of the routine. If periodic boundary conditions are specified, this is handled by the AMRClaw software along with all internal boundaries, rather than in bcNamr.f. With AMR it is not so easy to apply periodic boundary conditions as it is in the case of a single grid, since it is necessary to determine whether there is a grid at the same refinement level at the opposite side of the domain to copy ghost cell values from, and if so which grid and what index corresponds to the desired location. Choosing and initializing finer grids¶ Every few time steps on the coarsest level it is generally necessary to revise modify the regions of refinement at all levels, for example to follow a propagating shock wave. This is done by 1. Flagging cells that need refinement according to some criteria. 2. Clustering the flagged cells into rectangular patches that will form the new set of grids at the next higher level. 3. Creating the new grids and initializing the values of q and also any aux arrays for each new grid. Clustering is done using and algorithm developed by Berger and Rigoutsis [BergerRigoutsis91] that finds a nonoverlapping set of rectangles that cover all flagged points and balances the following conflicting goals: □ Cover as few points as possible that are not flagged, to reduce the number of grid cells that must be advanced in each time step. □ Create as few new grids as possible, to minimize the overhead associated with filling ghost cells and doing the conservation fix-up around edges of grids. A parameter cutoff can be specified (see Specifying AMRClaw run-time parameters in setrun.py) to control clustering. The algorithm will choose the grids in such a way that at least this fraction of all the grid points in all the new grids will be in cells that were flagged as needing refinement. Usually cutoff = 0.7 is used, so at least 70% of all grid cells in a computation are in regions where they are really needed. Initializing the new grids at Level k+1 is done as follows: □ At points where there was already a Level k+1 grid present, this value is copied over. □ At points where there was not previously a Level k+1 grid, bilinear interpolation is performed based on the Level k grids. Flagging cells for refinement¶ The user can control the criteria used for flagging cells for refinement. See AMR refinement criteria for details.
{"url":"https://www.clawpack.org/dev/amr_algorithm.html","timestamp":"2024-11-08T21:39:36Z","content_type":"text/html","content_length":"18560","record_id":"<urn:uuid:2aeb489d-93f5-41fa-b0c1-eb3c18ad138e>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00651.warc.gz"}
Wittgenstein's Philosophy of Mathematics First published Fri Feb 23, 2007; substantive revision Mon Mar 21, 2011 Ludwig Wittgenstein's Philosophy of Mathematics is undoubtedly the most unknown and under-appreciated part of his philosophical opus. Indeed, more than half of Wittgenstein's writings from 1929 through 1944 are devoted to mathematics, a fact that Wittgenstein himself emphasized in 1944 by writing that his “chief contribution has been in the philosophy of mathematics” (Monk 1990, 466). The core of Wittgenstein's conception of mathematics is very much set by the Tractatus Logico-Philosophicus (1922; hereafter Tractatus), where his main aim is to work out the language-reality connection by determining what is required for language, or language usage, to be about the world. Wittgenstein answers this question, in part, by asserting that the only genuine propositions that we can use to make assertions about reality are contingent (‘empirical’) propositions, which are true if they agree with reality and false otherwise (4.022, 4.25, 4.062, 2.222). From this it follows that all other apparent propositions are pseudo-propositions of various types and that all other uses of ‘true’ and ‘truth’ deviate markedly from the truth-by-correspondence (or agreement) that contingent propositions have in relation to reality. Thus, from the Tractatus to at least 1944, Wittgenstein maintains that “mathematical propositions” are not real propositions and that “mathematical truth” is essentially non-referential and purely syntactical in nature. On Wittgenstein's view, we invent mathematical calculi and we expand mathematics by calculation and proof, and though we learn from a proof that a theorem can be derived from axioms by means of certain rules in a particular way, it is not the case that this proof-path pre-exists our construction of it. As we shall see, Wittgenstein's Philosophy of Mathematics begins in a rudimentary way in the Tractatus, develops into a finitistic constructivism in the middle period (Philosophical Remarks (1929–30) and Philosophical Grammar (1931–33), respectively; hereafter PR and PG, respectively), and is further developed in new and old directions in the MSS used for Remarks on the Foundations of Mathematics (1937–44; hereafter RFM). As Wittgenstein's substantive views on mathematics evolve from 1918 through 1944, his writing and philosophical styles evolve from the assertoric, aphoristic style of the Tractatus to a clearer, argumentative style in the middle period, to a dialectical, interlocutory style in RFM and the Philosophical Investigations (hereafter PI). Wittgenstein's non-referential, formalist conception of mathematical propositions and terms begins in the Tractatus.^[1] Indeed, insofar as he sketches a rudimentary Philosophy of Mathematics in the Tractatus, he does so by contrasting mathematics and mathematical equations with genuine (contingent) propositions, sense, thought, propositional signs and their constituent names, and In the Tractatus, Wittgenstein claims that a genuine proposition, which rests upon conventions, is used by us to assert that a state of affairs (i.e., an elementary or atomic fact; ‘Sachverhalt’) or fact (i.e., multiple states of affairs; ‘Tatsache’) obtain(s) in the one and only real world. An elementary proposition is isomorphic to the possible state of affairs it is used to represent: it must contain as many names as there are objects in the possible state of affairs. An elementary proposition is true iff its possible state of affairs (i.e., its ‘sense’; ‘Sinn’) obtains. Wittgenstein clearly states this Correspondence Theory of Truth at (4.25): “If an elementary proposition is true, the state of affairs exists; if an elementary proposition is false, the state of affairs does not exist.” But propositions and their linguistic components are, in and of themselves, dead—a proposition only has sense because we human beings have endowed it with a conventional sense (5.473). Moreover, propositional signs may be used to do any number of things (e.g., insult, catch someone's attention); in order to assert that a state of affairs obtains, a person must ‘project’ the proposition's sense—its possible state of affairs—by ‘thinking’ of (e.g., picturing) its sense as one speaks, writes or thinks the proposition (3.11). Wittgenstein connects use, sense, correspondence , and truth by saying that “a proposition is true if we use it to say that things stand in a certain way, and they do” (4.062; italics added). The Tractarian conceptions of genuine (contingent) propositions and the (original and) core concept of truth are used to construct theories of logical and mathematical ‘propositions’ by contrast. Stated boldly and bluntly, tautologies, contradictions and mathematical propositions (i.e., mathematical equations) are neither true nor false—we say that they are true or false, but in doing so we use the words ‘true’ and ‘false’ in very different senses from the sense in which a contingent proposition is true or false. Unlike genuine propositions, tautologies and contradictions “have no ‘subject-matter’” (6.124), “lack sense,” and “say nothing” about the world (4.461), and, analogously, mathematical equations are “pseudo-propositions” (6.2) which, when ‘true’ (‘correct’; ‘richtig’ (6.2321)), “merely mark[…]… [the] equivalence of meaning [of ‘two expressions’]” (6.2323). Given that “[t]autology and contradiction are the limiting cases—indeed the disintegration—of the combination of signs” (4.466; italics added), where “the conditions of agreement with the world—the representational relations—cancel one another, so that [they] do[] not stand in any representational relation to reality,” tautologies and contradictions do not picture reality or possible states of affairs and possible facts (4.462). Stated differently, tautologies and contradictions do not have sense, which means we cannot use them to make assertions, which means, in turn, that they cannot be either true or false. Analogously, mathematical pseudo-propositions are equations, which indicate or show that two expressions are equivalent in meaning and therefore are intersubstitutable. Indeed, we arrive at mathematical equations by “the method of substitution”: “starting from a number of equations, we advance to new equations by substituting different expressions in accordance with the equations” (6.24). We prove mathematical ‘propositions’ ‘true’ (‘correct’) by ‘seeing’ that two expressions have the same meaning, which “must be manifest in the two expressions themselves” (6.23), and by substituting one expression for another with the same meaning. Just as “one can recognize that [“logical propositions”] are true from the symbol alone” (6.113), “the possibility of proving” mathematical propositions means that we can perceive their correctness without having to compare “what they express” with facts (6.2321; cf. (RFM App. III, §4)). The demarcation between contingent propositions, which can be used to correctly or incorrectly represent parts of the world, and mathematical propositions, which can be decided in a purely formal, syntactical manner, is maintained by Wittgenstein until his death in 1951 (Zettel §701, 1947; PI II, 2001 Ed., pp. 192–193e, 1949). Given linguistic and symbolic conventions, the truth-value of a contingent proposition is entirely a function of how the world is, whereas the “truth-value” of a mathematical proposition is entirely a function of its constituent symbols and the formal system of which it is a part. Thus, a second, closely related way of stating this demarcation is to say that mathematical propositions are decidable by purely formal means (e.g., calculations), while contingent propositions, being about the ‘external’ world, can only be decided, if at all, by determining whether or not a particular fact obtains (i.e., something external to the proposition and the language in which it resides) (2.223; 4.05). The Tractarian formal theory of mathematics is, specifically, a theory of formal operations. Over the past 10 years, Wittgenstein's theory of operations has received considerable examination [(Frascolla 1994; 1997), (Marion 1998), (Potter 2000), and (Floyd 2002)], which has interestingly connected it and the Tractarian equational theory of arithmetic with elements of Alonzo Church's λ-calculus and with R. L. Goodstein's equational calculus (Marion 1998, Chapters 1, 2, and 4). Very briefly stated, Wittgenstein presents: a. … the sign ‘[a, x, O’x]’ for the general term of the series of forms a, O’a, O’O’a, …. (5.2522) b. … the general form of an operation Ω’(η) [as] [ξ, N(ξ)]’(η) (= [η, ξ, N(ξ)]). (6.01) c. … the general form of a proposition (“truth-function”) [as] [p, ξ, N(ξ)]. (6) d. The general form of an integer [natural number] [as] [0, ξ, ξ + 1]. (6.03) adding that “[t]he concept of number is… the general form of a number” (6.022). As Frascolla (and Marion after him) have pointed out, “the general form of a proposition is a particular case of the general form of an ‘operation’” (Marion 1998, p. 21), and all three general forms (i.e., of operation, proposition, and natural number) are modeled on the variable presented at (5.2522) (Marion 1998, p. 22). Defining “[a]n operation [as] the expression of a relation between the structures of its result and of its bases” (5.22), Wittgenstein states that whereas “[a] function cannot be its own argument,… an operation can take one of its own results as its base” (5.251). On Wittgenstein's (5.2522) account of ‘[a, x, O’x]’, “the first term of the bracketed expression is the beginning of the series of forms, the second is the form of a term x arbitrarily selected from the series, and the third [O’x] is the form of the term that immediately follows x in the series.” Given that “[t]he concept of successive applications of an operation is equivalent to the concept ‘and so on’” (5.2523), one can see how the natural numbers can be generated by repeated iterations of the general form of a natural number, namely ‘[0, ξ, ξ +1]’. Similarly, truth-functional propositions can be generated, as Russell says in the Introduction to the Tractatus (p. xv), from the general form of a proposition ‘[p, ξ, N(ξ)]’ by “taking any selection of atomic propositions [where p “stands for all atomic propositions”; “the bar over the variable indicates that it is the representative of all its values” (5.501)], negating them all, then taking any selection of the set of propositions now obtained, together with any of the originals [where x “stands for any set of propositions”]—and so on indefinitely.” On Frascolla's (1994, 3ff) account, “a numerical identity “t = s” is an arithmetical theorem if and only if the corresponding equation “Ω^t’x = Ω^s’x”, which is framed in the language of the general theory of logical operations, can be proven.” By proving ‘the equation “Ω^2×2’x = Ω^4’x”, which translates the arithmetic identity “2 × 2 = 4” into the operational language’ (6.241), Wittgenstein thereby outlines “a translation of numerical arithmetic into a sort of general theory of operations” (Frascolla 1998, 135). Despite the fact that Wittgenstein clearly does not attempt to reduce mathematics to logic in either Russell's manner or Frege's manner, or to tautologies, and despite the fact that Wittgenstein criticizes Russell's Logicism (e.g., the Theory of Types, 3.31–3.32; the Axiom of Reducibility, 6.1232, etc.) and Frege's Logicism (6.031, 4.1272, etc.),^[2] quite a number of commentators, early and recent, have interpreted Wittgenstein's Tractarian theory of mathematics as a variant of Logicism [(Quine 1940 [1981, 55]), (Benacerraf and Putnam 1964, 14), (Black 1966, 340), (Savitt 1979 [1986], 34), (Frascolla 1994, 37; 1997, 354, 356–57, 361; 1998, 133), (Marion 1998, 26 & 29), and (Potter 2000, 164 and 182–183)]. There are at least four reasons proffered for this interpretation. 1. Wittgenstein says that “[m]athematics is a method of logic” (6.234). 2. Wittgenstein says that “[t]he logic of the world, which is shown in tautologies by the propositions of logic, is shown in equations by mathematics” (6.22). 3. According to Wittgenstein, we ascertain the truth of both mathematical and logical propositions by the symbol alone (i.e., by purely formal operations), without making any (‘external,’ non-symbolic) observations of states of affairs or facts in the world. 4. Wittgenstein's iterative (inductive) “interpretation of numerals as exponents of an operation variable” is a “reduction of arithmetic to operation theory,” where “operation” is construed as a “ logical operation” (italics added) (Frascolla 1994, 37), which shows that ‘the label “no-classes logicism” tallies with the Tractatus view of arithmetic’ (Frascolla 1998, 133; 1997, 354). Though at least three Logicist interpretations of the Tractatus have appeared within the last 8 years, the following considerations [(Rodych 1995), (Wrigley 1998)] indicate that none of these reasons is particularly cogent. For example, in saying that “[m]athematics is a method of logic” perhaps Wittgenstein is only saying that since the general form of a natural number and the general form of a proposition are both instances of the general form of a (purely formal) operation, just as truth-functional propositions can be constructed using the general form of a proposition, (true) mathematical equations can be constructed using the general form of a natural number. Alternatively, Wittgenstein may mean that mathematical inferences (i.e., not substitutions) are in accord with, or make use of, logical inferences, and insofar as mathematical reasoning is logical reasoning, mathematics is a method of logic. Similarly, in saying that “[t]he logic of the world” is shown by tautologies and true mathematical equations (i.e., #2), Wittgenstein may be saying that since mathematics was invented to help us count and measure, insofar as it enables us to infer contingent proposition(s) from contingent proposition(s) (see 6.211 below), it thereby reflects contingent facts and “[t]he logic of the world.” Though logic—which is inherent in natural (‘everyday’) language (4.002, 4.003, 6.124) and which has evolved to meet our communicative, exploratory, and survival needs—is not invented in the same way, a valid logical inference captures the relationship between possible facts and a sound logical inference captures the relationship between existent facts. As regards #3, Black, Savitt, and Frascolla have argued that, since we ascertain the truth of tautologies and mathematical equations without any appeal to “states of affairs” or “facts,” true mathematical equations and tautologies are so analogous that we can “aptly” describe “the philosophy of arithmetic of the Tractatus… as a kind of logicism” (Frascolla, 1994, 37). The rejoinder to this is that the similarity that Frascolla, Black and Savitt recognize does not make Wittgenstein's theory a “kind of logicism” in Frege's or Russell's sense, because Wittgenstein does not define numbers “logically” in either Frege's way or Russell's way, and the similarity (or analogy) between tautologies and true mathematical equations is neither an identity nor a relation of reducibility. Finally, critics argue that the problem with #4 is that there is no evidence for the claim that the relevant operation is logical in Wittgenstein's or Russell's or Frege's sense of the term—it seems a purely formal, syntactical operation (Rodych 1995). “Logical operations are performed with propositions, arithmetical ones with numbers,” says Wittgenstein (WVC 218); “[t]he result of a logical operation is a proposition, the result of an arithmetical one is a number.” In sum, critics of the Logicist interpretation of the Tractatus argue that ##1–4 do not individually or collectively constitute cogent grounds for a Logicist interpretation of the Tractatus. Another crucial aspect of the Tractarian theory of mathematics is captured in (6.211). Indeed in real life a mathematical proposition is never what we want. Rather, we make use of mathematical propositions only in inferences from propositions that do not belong to mathematics to others that likewise do not belong to mathematics. (In philosophy the question, ‘What do we actually use this word or this proposition for?’ repeatedly leads to valuable insights.) Though mathematics and mathematical activity are purely formal and syntactical, in the Tractatus Wittgenstein tacitly distinguishes between purely formal games with signs, which have no application in contingent propositions, and mathematical propositions, which are used to make inferences from contingent proposition(s) to contingent proposition(s). Wittgenstein does not explicitly say, however, how mathematical equations, which are not genuine propositions, are used in inferences from genuine proposition(s) to genuine proposition(s) [(Floyd 2002, 309), (Kremer 2002, 293–94)]. As we shall see in §3.5, the later Wittgenstein returns to the importance of extra-mathematical application and uses it to distinguish a mere “sign-game” from a genuine, mathematical language-game. This, in brief, is Wittgenstein's Tractarian theory of mathematics. In the Introduction to the Tractatus, Russell wrote that Wittgenstein's “theory of number” “stands in need of greater technical development,” primarily because Wittgenstein had not shown how it could deal with transfinite numbers (Wittgenstein 1922, xx). Similarly, in his review of the Tractatus, Frank Ramsey wrote that Wittgenstein's ‘account’ does not cover all of mathematics partly because Wittgenstein's theory of equations cannot explain inequalities (Ramsey 1923, 475). Though it is doubtful that, in 1923, Wittgenstein would have thought these issues problematic, it certainly is true that the Tractarian theory of mathematics is essentially a sketch, especially in comparison with what Wittgenstein begins to develop six years later. After the completion of the Tractatus in 1918, Wittgenstein did virtually no philosophical work until February 2, 1929, eleven months after attending a lecture by the Dutch mathematician L.E.J. There is little doubt that Wittgenstein was invigorated by L.E.J. Brouwer's March 10, 1928 Vienna lecture “Science, Mathematics, and Language” (Brouwer 1929), which he attended with F. Waismann and H. Feigl, but it is a gross overstatement to say that he returned to Philosophy because of this lecture or that his intermediate interest in the Philosophy of Mathematics issued primarily from Brouwer's influence. In fact, Wittgenstein's return to Philosophy and his intermediate work on mathematics is also due to conversations with Ramsey and members of the Vienna circle, to Wittgenstein's disagreement with Ramsey over identity, and several other factors. Though Wittgenstein seems not to have read any Hilbert or Brouwer prior to the completion of the Tractatus, by early 1929 Wittgenstein had certainly read work by Brouwer, Weyl, Skolem, Ramsey (and possibly Hilbert) and, apparently, he had had one or more private discussions with Brouwer in 1928 [(Le Roy Finch 1977, 260), (Van Dalen 2005, 566–567)]. Thus, the rudimentary treatment of mathematics in the Tractatus, whose principal influences were Russell and Frege, was succeeded by detailed work on mathematics in the middle period (1929–1933), which was strongly influenced by the 1920s work of Brouwer, Weyl, Hilbert, and Skolem. To best understand Wittgenstein's intermediate Philosophy of Mathematics, one must fully appreciate his strong variant of formalism, according to which “[w]e make mathematics” (WVC 34, Ft. #1; PR §159) by inventing purely formal mathematical calculi, with ‘stipulated’ axioms (PR §202), syntactical rules of transformation, and decision procedures that enable us to invent “mathematical truth” and “mathematical falsity” by algorithmically deciding so-called mathematical ‘propositions’ (PR §§122, 162). The core idea of Wittgenstein's formalism from 1929 (if not 1918) through 1944 is that mathematics is essentially syntactical, devoid of reference and semantics. The most obvious aspect of this view, which has been noted by numerous commentators who do not refer to Wittgenstein as a ‘formalist’ [(Kielkopf 1970, 360–38), (Klenk 1976, 5, 8, 9), (Fogelin 1968, 267), (Frascolla 1994, 40), (Marion 1998, 13–14)], is that, contra Platonism, the signs and propositions of a mathematical calculus do not refer to anything. As Wittgenstein says at (WVC 34, Ft. #1), “[n]umbers are not represented by proxies; numbers are there.” This means not only that numbers are there in the use, it means that the numerals are the numbers, for “[a]rithmetic doesn't talk about numbers, it works with numbers” ( PR §109). What arithmetic is concerned with is the schema | | | |.—But does arithmetic talk about the lines I draw with pencil on paper?—Arithmetic doesn't talk about the lines, it operates with them. (PG In a similar vein, Wittgenstein says that (WVC 106) “mathematics is always a machine, a calculus” and “[a] calculus is an abacus, a calculator, a calculating machine,” which “works by means of strokes, numerals, etc.” The “justified side of formalism,” according to Wittgenstein (WVC 105), is that mathematical symbols “lack a meaning” (i.e., ‘Bedeutung’)—they do not “go proxy for” things which are “their meaning[s].” You could say arithmetic is a kind of geometry; i.e. what in geometry are constructions on paper, in arithmetic are calculations (on paper).—You could say it is a more general kind of geometry. ( PR §109; PR §111) This is the core of Wittgenstein's life-long formalism. When we prove a theorem or decide a proposition, we operate in a purely formal, syntactical manner. In doing mathematics, we do not discover pre-existing truths that were “already there without one knowing” (PG 481)—we invent mathematics, bit-by-little-bit. “If you want to know what 2 + 2 = 4 means,” says Wittgenstein, “you have to ask how we work it out,” because “we consider the process of calculation as the essential thing” (PG 333). Hence, the only meaning (i.e., sense) that a mathematical proposition has is intra-systemic meaning, which is wholly determined by its syntactical relations to other propositions of the calculus. A second important aspect of the intermediate Wittgenstein's strong formalism is his view that extra-mathematical application (and/or reference) is not a necessary condition of a mathematical calculus. Mathematical calculi do not require extra-mathematical applications, Wittgenstein argues, since we “can develop arithmetic completely autonomously and its application takes care of itself since wherever it's applicable we may also apply it” (PR §109; cf. PG 308, WVC 104). As we shall shortly see, the middle Wittgenstein is also drawn to strong formalism by a new concern with questions of decidability. Undoubtedly influenced by the writings of Brouwer and David Hilbert, Wittgenstein uses strong formalism to forge a new connection between mathematical meaningfulness and algorithmic decidability. An equation is a rule of syntax. Doesn't that explain why we cannot have questions in mathematics that are in principle unanswerable? For if the rules of syntax cannot be grasped, they’re of no use at all…. [This] makes intelligible the attempts of the formalist to see mathematics as a game with signs. (PR §121) In Section 2.3, we shall see how Wittgenstein goes beyond both Hilbert and Brouwer by maintaining the Law of the Excluded Middle in a way that restricts mathematical propositions to expressions that are algorithmically decidable. The single most important difference between the Early and Middle Wittgenstein is that, in the middle period, Wittgenstein rejects quantification over an infinite mathematical domain, stating that, contra his Tractarian view, such ‘propositions’ are not infinite conjunctions and infinite disjunctions simply because there are no such things. Wittgenstein's principal reasons for developing a finitistic Philosophy of Mathematics are as follows. 1. Mathematics as Human Invention: According to the middle Wittgenstein, we invent mathematics, from which it follows that mathematics and so-called mathematical objects do not exist independently of our inventions. Whatever is mathematical is fundamentally a product of human activity. 2. Mathematical Calculi Consist Exclusively of Intensions and Extensions: Given that we have invented only mathematical extensions (e.g., symbols, finite sets, finite sequences, propositions, axioms) and mathematical intensions (e.g., rules of inference and transformation, irrational numbers as rules), these extensions and intensions, and the calculi in which they reside, constitute the entirety of mathematics. (It should be noted that Wittgenstein's usage of ‘extension’ and ‘intension’ as regards mathematics differs markedly from standard contemporary usage, wherein the extension of a predicate is the set of entities that satisfy the predicate and the intension of a predicate is the meaning of, or expressed by, the predicate. Put succinctly, Wittgenstein thinks that the extension of this notion of concept-and-extension from the domain of existent (i.e., physical) objects to the so-called domain of “mathematical objects” is based on a faulty analogy and engenders conceptual confusion. See #1 just below.) These two reasons have at least five immediate consequences for Wittgenstein's Philosophy of Mathematics. 1. Rejection of Infinite Mathematical Extensions: Given that a mathematical extension is a symbol (‘sign’) or a finite concatenation of symbols extended in space, there is a categorical difference between mathematical intensions and (finite) mathematical extensions, from which it follows that “the mathematical infinite” resides only in recursive rules (i.e., intensions). An infinite mathematical extension (i.e., a completed, infinite mathematical extension) is a contradiction-in-terms 2. Rejection of Unbounded Quantification in Mathematics: Given that the mathematical infinite can only be a recursive rule, and given that a mathematical proposition must have sense, it follows that there cannot be an infinite mathematical proposition (i.e., an infinite logical product or an infinite logical sum). 3. Algorithmic Decidability vs. Undecidability: If mathematical extensions of all kinds are necessarily finite, then, in principle, all mathematical propositions are algorithmically decidable, from which it follows that an “undecidable mathematical proposition” is a contradiction-in-terms. Moreover, since mathematics is essentially what we have and what we know, Wittgenstein restricts algorithmic decidability to knowing how to decide a proposition with a known decision procedure. 4. Anti-Foundationalist Account of Real Numbers: Since there are no infinite mathematical extensions, irrational numbers are rules, not extensions. Given that an infinite set is a recursive rule (or an induction) and no such rule can generate all of the things mathematicians call (or want to call) “real numbers,” it follows that there is no set of ‘all’ the real numbers and no such thing as the mathematical continuum. 5. Rejection of Different Infinite Cardinalities: Given the non-existence of infinite mathematical extensions, Wittgenstein rejects the standard interpretation of Cantor's diagonal proof as a proof of infinite sets of greater and lesser cardinalities. Since we invent mathematics in its entirety, we do not discover pre-existing mathematical objects or facts or that mathematical objects have certain properties, for “one cannot discover any connection between parts of mathematics or logic that was already there without one knowing” (PG 481). In examining mathematics as a purely human invention, Wittgenstein tries to determine what exactly we have invented and why exactly, in his opinion, we erroneously think that there are infinite mathematical extensions. If, first, we examine what we have invented, we see that we have invented formal calculi consisting of finite extensions and intensional rules. If, more importantly, we endeavour to determine why we believe that infinite mathematical extensions exist (e.g., why we believe that the actual infinite is intrinsic to mathematics), we find that we conflate mathematical intensions and mathematical extensions, erroneously thinking that there is “a dualism” of “the law and the infinite series obeying it” (PR §180). For instance, we think that because a real number “endlessly yields the places of a decimal fraction” (PR §186), it is “a totality” (WVC 81–82, Ft. #1), when, in reality, “[a]n irrational number isn't the extension of an infinite decimal fraction,… it's a law” (PR §181) which “yields extensions” (PR §186). A law and a list are fundamentally different; neither can ‘give’ what the other gives (WVC 102–103). Indeed, “the mistake in the set-theoretical approach consists time and again in treating laws and enumerations (lists) as essentially the same kind of thing” (PG 461). Closely related with this conflation of intensions and extensions is the fact that we mistakenly act as if the word ‘infinite’ is a “number word,” because in ordinary discourse we answer the question “how many?” with both (PG 463; cf. PR §142). But “‘[i]nfinite’ is not a quantity,” Wittgenstein insists (WVC 228); the word ‘infinite’ and a number word like ‘five’ do not have the same syntax. The words ‘finite’ and ‘infinite’ do not function as adjectives on the words ‘class’ or ‘set,’ (WVC 102), for the terms “finite class” and “infinite class” use ‘class’ in completely different ways (WVC 228). An infinite class is a recursive rule or “an induction,” whereas the symbol for a finite class is a list or extension (PG 461). It is because an induction has much in common with the multiplicity of a finite class that we erroneously call it an infinite class (PR §158). In sum, because a mathematical extension is necessarily a finite sequence of symbols, an infinite mathematical extension is a contradiction-in-terms. This is the foundation of Wittgenstein's finitism. Thus, when we say, e.g., that “there are infinitely many even numbers,” we are not saying “there are an infinite number of even numbers” in the same sense as we can say “there are 27 people in this house”; the infinite series of natural numbers is nothing but “the infinite possibility of finite series of numbers”—“[i]t is senseless to speak of the whole infinite number series, as if it, too, were an extension” (PR §144). The infinite is understood rightly when it is understood, not as a quantity, but as an “infinite possibility” (PR §138). Given Wittgenstein's rejection of infinite mathematical extensions, he adopts finitistic, constructive views on mathematical quantification, mathematical decidability, the nature of real numbers, and Cantor's diagonal proof of the existence of infinite sets of greater cardinalities. Since a mathematical set is a finite extension, we cannot meaningfully quantify over an infinite mathematical domain, simply because there is no such thing as an infinite mathematical domain (i.e., totality, set), and, derivatively, no such things as infinite conjunctions or disjunctions [(Moore 1955, 2–3); cf. (AWL 6) and (PG 281)]. [I]t still looks now as if the quantifiers make no sense for numbers. I mean: you can't say ‘(n) φn’, precisely because ‘all natural numbers’ isn't a bounded concept. But then neither should one say a general proposition follows from a proposition about the nature of number. But in that case it seems to me that we can't use generality—all, etc.—in mathematics at all. There's no such thing as ‘all numbers’, simply because there are infinitely many. (PR §126; PR §129) ‘Extensionalists’ who assert that “ε(0).ε(1).ε(2) and so on” is an infinite logical product (PG 452) assume or assert that finite and infinite conjunctions are close cousins—that the fact that we cannot write down or enumerate all of the conjuncts ‘contained’ in an infinite conjunction is only a “human weakness,” for God could surely do so and God could surely survey such a conjunction in a single glance and determine its truth-value. According to Wittgenstein, however, this is not a matter of human limitation. Because we mistakenly think that “an infinite conjunction” is similar to “an enormous conjunction,” we erroneously reason that just as we cannot determine the truth-value of an enormous conjunction because we don't have enough time, we similarly cannot, due to human limitations, determine the truth-value of an infinite conjunction (or disjunction). But the difference here is not one of degree but of kind: “in the sense in which it is impossible to check an infinite number of propositions it is also impossible to try to do so” (PG 452). This applies, according to Wittgenstein, to human beings, but more importantly, it applies also to God (i.e., an omniscient being), for even God cannot write down or survey infinitely many propositions because for him too the series is never-ending or limitless and hence the ‘task’ is not a genuine task because it cannot, in principle, be done (i.e., “infinitely many” is not a number word). As Wittgenstein says at (PR 128; cf. PG 479): “‘Can God know all the places of the expansion of π?’ would have been a good question for the schoolmen to ask,” for the question is strictly ‘senseless.’ As we shall shortly see, on Wittgenstein's account, “[a] statement about all numbers is not represented by means of a proposition, but by means of induction” (WVC 82). Similarly, there is no such thing as a mathematical proposition about some number—no such thing as a mathematical proposition that existentially quantifies over an infinite domain (PR §173). What is the meaning of such a mathematical proposition as ‘(∃n) 4 + n = 7’? It might be a disjunction — (4 + 0 = 7) ∨ (4 + 1 = 7) ∨ etc. ad inf. But what does that mean? I can understand a proposition with a beginning and an end. But can one also understand a proposition with no end? (PR §127) We are particularly seduced by the feeling or belief that an infinite mathematical disjunction makes good sense in the case where we can provide a recursive rule for generating each next member of an infinite sequence. For example, when we say “There exists an odd perfect number” we are asserting that, in the infinite sequence of odd numbers, there is (at least) one odd number that is perfect—we are asserting ‘φ(1) ∨ φ(3) ∨ φ(5) ∨ and so on’ and we know what would make it true and what would make it false (PG 451). The mistake here made, according to Wittgenstein (PG 451), is that we are implicitly ‘comparing the proposition “(∃n)…” with the proposition… “There are two foreign words on this page”,’ which doesn't provide the grammar of the former ‘proposition,’ but only indicates an analogy in their respective rules. On Wittgenstein's intermediate finitism, an expression quantifying over an infinite domain is never a meaningful proposition, not even when we have proved, for instance, that a particular number n has a particular property. The important point is that, even in the case where I am given that 3^2 + 4^2 = 5^2, I ought not to say ‘(∃x, y, z, n) (x^n + y^n = z^n), since taken extensionally that's meaningless, and taken intensionally this doesn't provide a proof of it. No, in this case I ought to express only the first equation. (PR §150) Thus, Wittgenstein adopts the radical position that all expressions that quantify over an infinite domain, whether ‘conjectures’ (e.g., Goldbach's Conjecture, the Twin Prime Conjecture) or “proved general theorems” (e.g., “Euclid's Prime Number Theorem,” the Fundamental Theorem of Algebra), are meaningless (i.e., ‘senseless’; ‘sinnlos’) expressions as opposed to “genuine mathematical proposition[s]” (PR §168). These expressions are not (meaningful) mathematical propositions, according to Wittgenstein, because the Law of the Excluded Middle does not apply, which means that “we aren't dealing with propositions of mathematics” (PR §151). The crucial question why and in exactly what sense the Law of the Excluded Middle does not apply to such expressions will be answered in the next section. The middle Wittgenstein has other grounds for rejecting unrestricted quantification in mathematics, for on his idiosyncratic account, we must distinguish between four categories of concatenations of mathematical symbols. 1. Proved mathematical propositions in a particular mathematical calculus (no need for “mathematical truth”). 2. Refuted mathematical propositions in (or of) a particular mathematical calculus (no need for “mathematical falsity”). 3. Mathematical propositions for which we know we have in hand an applicable and effective decision procedure (i.e., we know how to decide them). 4. Concatenations of symbols that are not part of any mathematical calculus and which, for that reason, are not mathematical propositions (i.e., are non-propositions). In his (van Atten 2004, 18), Mark van Atten says that “[i]ntuitionistically, there are four [“possibilities for a proposition with respect to truth”]: 1. p has been experienced as true 2. p has been experienced as false 3. Neither 1 nor 2 has occurred yet, but we know a procedure to decide p (i.e., a procedure that will prove p or prove ¬p) 4. Neither 1 nor 2 has occurred yet, and we do not know a procedure to decide p.” What is immediately striking about Wittgenstein's ##1–3 and Brouwer's ##1–3 [(Brouwer 1955, 114), (Brouwer 1981, 92)] is the enormous similarity. And yet, for all of the agreement, the disagreement in #4 is absolutely crucial. As radical as the respective #3s are, Brouwer and Wittgenstein agree that an undecided φ is a mathematical proposition (for Wittgenstein, of a particular mathematical calculus) if we know of an applicable decision procedure. They also agree that until φ is decided, it is neither true nor false (though, for Wittgenstein, ‘true’ means no more than “proved in calculus Γ”). What they disagree about is the status of an ordinary mathematical conjecture, such as Goldbach's Conjecture. Brouwer admits it as a mathematical proposition, while Wittgenstein rejects it because we do not know how to algorithmically decide it. Like Brouwer (1948 [1983, 90]), Wittgenstein holds that there are no “unknown truth[s]” in mathematics, but unlike Brouwer he denies the existence of “undecidable propositions” on the grounds that such a ‘proposition’ would have no ‘sense,’ “and the consequence of this is precisely that the propositions of logic lose their validity for it” (PR §173). In particular, if there are undecidable mathematical propositions (as Brouwer maintains), then at least some mathematical propositions are not propositions of any existent mathematical calculus. For Wittgenstein, however, it is a defining feature of a mathematical proposition that it is either decided or decidable by a known decision procedure in a mathematical calculus. As Wittgenstein says at (PR §151), “where the law of the excluded middle doesn't apply, no other law of logic applies either, because in that case we aren't dealing with propositions of mathematics. (Against Weyl and Brouwer).” The point here is not that we need truth and falsity in mathematics—we don't—but rather that every mathematical proposition (including ones for which an applicable decision procedure is known) is known to be part of a mathematical calculus. To maintain this position, Wittgenstein distinguishes between (meaningful, genuine) mathematical propositions, which have mathematical sense, and meaningless, senseless (‘sinnlos’) expressions by stipulating that an expression is a meaningful (genuine) proposition of a mathematical calculus iff we know of a proof, a refutation, or an applicable decision procedure [(PR §151), (PG 452), (PG 366), (AWL 199–200)]. “Only where there's a method of solution [a “logical method for finding a solution”] is there a [mathematical] problem,” he tells us (PR §§149, 152; PG 393). “We may only put a question in mathematics (or make a conjecture),” he adds (PR §151), “where the answer runs: ‘I must work it out’.” At (PG 468), Wittgenstein emphasizes the importance of algorithmic decidability clearly and emphatically: “In mathematics everything is algorithm and nothing is meaning [‘Bedeutung’]; even when it doesn't look like that because we seem to be using words to talk about mathematical things. Even these words are used to construct an algorithm.” When, therefore, Wittgenstein says (PG 368) that if “ [the Law of the Excluded Middle] is supposed not to hold, we have altered the concept of proposition,” he means that an expression is only a meaningful mathematical proposition if we know of an applicable decision procedure for deciding it (PG 400). If a genuine mathematical proposition is undecided, the Law of the Excluded Middle holds in the sense that we know that we will prove or refute the proposition by applying an applicable decision procedure (PG 379, 387). For Wittgenstein, there simply is no distinction between syntax and semantics in mathematics: everything is syntax. If we wish to demarcate between “mathematical propositions” versus “mathematical pseudo-propositions,” as we do, then the only way to ensure that there is no such thing as a meaningful, but undecidable (e.g., independent), proposition of a given calculus is to stipulate that an expression is only a meaningful proposition in a given calculus (PR §153) if either it has been decided or we know of an applicable decision procedure. In this manner, Wittgenstein defines both a mathematical calculus and a mathematical proposition in epistemic terms. A calculus is defined in terms of stipulations [(PR §202), (PG 369)], known rules of operation, and known decision procedures, and an expression is only a mathematical proposition in a given calculus (PR §155), and only if that calculus contains (PG 379) a known (and applicable) decision procedure, for “you cannot have a logical plan of search for a sense you don't know” (PR §148). Thus, the middle Wittgenstein rejects undecidable mathematical propositions on two grounds. First, number-theoretic expressions that quantify over an infinite domain are not algorithmically decidable, and hence are not meaningful mathematical propositions. If someone says (as Brouwer does) that for (x) f[1]x = f[2]x, there is, as well as yes and no, also the case of undecidability, this implies that ‘(x)…’ is meant extensionally and we may talk of the case in which all x happen to have a property. In truth, however, it's impossible to talk of such a case at all and the ‘(x)…’ in arithmetic cannot be taken extensionally. (PR §174) “Undecidability,” says Wittgenstein (PR §174) “presupposes… that the bridge cannot be made with symbols,” when, in fact, “[a] connection between symbols which exists but cannot be represented by symbolic transformations is a thought that cannot be thought,” for “[i]f the connection is there,… it must be possible to see it.” Alluding to algorithmic decidability, Wittgenstein stresses (PR §174) that “[w]e can assert anything which can be checked in practice,” because “it's a question of the possibility of checking” [italics added]. Wittgenstein's second reason for rejecting an undecidable mathematical proposition is that it is a contradiction-in-terms. There cannot be “undecidable propositions,” Wittgenstein argues (PR §173), because an expression that is not decidable in some actual calculus is simply not a mathematical proposition, since “every proposition in mathematics must belong to a calculus of mathematics” (PG This radical position on decidability results in various radical and counter-intuitive statements about unrestricted mathematical quantification, mathematical induction, and, especially, the sense of a newly proved mathematical proposition. In particular, Wittgenstein asserts that uncontroversial mathematical conjectures, such as Goldbach's Conjecture (hereafter ‘GC’) and the erstwhile conjecture “Fermat's Last Theorem” (hereafter ‘FLT’), have no sense (or, perhaps, no determinate sense) and that the unsystematic proof of such a conjecture gives it a sense that it didn't previously have (PG 374) because “it's unintelligible that I should admit, when I've got the proof, that it's a proof of precisely this proposition, or of the induction meant by this proposition” (PR §155). Thus Fermat's [Last Theorem] makes no sense until I can search for a solution to the equation in cardinal numbers. And ‘search’ must always mean: search systematically. Meandering about in infinite space on the look-out for a gold ring is no kind of search. (PR §150) I say: the so-called ‘Fermat's Last Theorem’ isn't a proposition. (Not even in the sense of a proposition of arithmetic.) Rather, it corresponds to an induction. (PR §189) To see how Fermat's Last Theorem isn't a proposition and how it might correspond to an induction, we need to examine Wittgenstein's account of mathematical induction. Given that one cannot quantify over an infinite mathematical domain, the question arises: What, if anything, does any number-theoretic proof by mathematical induction actually prove? On the standard view, a proof by mathematical induction has the following paradigmatic form. Inductive Base: φ(1) Inductive Step: ∀n(φ(n) → φ(n + 1)) Conclusion: ∀nφ(n) If, however, “∀nφ(n)” is not a meaningful (genuine) mathematical proposition, what are we to make of this proof? Wittgenstein's initial answer to this question is decidedly enigmatic. “An induction is the expression for arithmetical generality,” but “induction isn't itself a proposition” (PR §129). We are not saying that when f(1) holds and when f(c + 1) follows from f(c), the proposition f(x) is therefore true of all cardinal numbers: but: “the proposition f(x) holds for all cardinal numbers” means “it holds for x = 1, and f(c + 1) follows from f(c)”. (PG 406) In a proof by mathematical induction, we do no actually prove the ‘proposition’ [e.g., ∀nφ(n)] that is customarily construed as the conclusion of the proof (PG 406, 374; PR §164), rather this pseudo-proposition or ‘statement’ stands ‘proxy’ for the “infinite possibility” (i.e., “the induction”) that we come to ‘see’ by means of the proof (WVC 135). “I want to say,” Wittgenstein concludes, that “once you’ve got the induction, it's all over” (PG 407). Thus, on Wittgenstein's account, a particular proof by mathematical induction should be understood in the following way. Inductive Base: φ(1) Inductive Step: φ(n) → φ(n + 1) Proxy Statement: φ(m) Here the ‘conclusion’ of an inductive proof [i.e., “what is to be proved” (PR §164)] uses ‘m’ rather than ‘n’ to indicate that ‘m’ stands for any particular number, while ‘n’ stands for any arbitrary number. For Wittgenstein, the proxy statement “φ(m)” is not a mathematical proposition that “assert[s] its generality” (PR §168), it is an eliminable pseudo-proposition standing proxy for the proved inductive base and inductive step. Though an inductive proof cannot prove “the infinite possibility of application” (PR §163), it enables us “to perceive” that a direct proof of any particular proposition can be constructed (PR §165). For example, once we have proved “φ(1)” and “φ(n) → φ(n + 1),” we need not reiterate modus ponens m − 1 times to prove the particular proposition “φ(m)” (PR §164). The direct proof of, say, “φ(714)” (i.e., without 713 iterations of modus ponens) “cannot have a still better proof, say, by my carrying out the derivation as far as this proposition itself” ( PR §165). A second, very important impetus for Wittgenstein's radically constructivist position on mathematical induction is his rejection of an undecidable mathematical proposition. In discussions of the provability of mathematical propositions it is sometimes said that there are substantial propositions of mathematics whose truth or falsehood must remain undecided. What the people who say that don't realize is that such propositions, if we can use them and want to call them “propositions”, are not at all the same as what are called “propositions” in other cases; because a proof alters the grammar of a proposition. (PG 367) In this passage, Wittgenstein is alluding to Brouwer, who, as early as 1907 and 1908, states, first, that “the question of the validity of the principium tertii exclusi is equivalent to the question whether unsolvable mathematical problems exist,” second, that “[t]here is not a shred of a proof for the conviction… that there exist no unsolvable mathematical problems,” and, third, that there are meaningful propositions/‘questions,’ such as “Do there occur in the decimal expansion of π infinitely many pairs of consecutive equal digits?”, to which the Law of the Excluded Middle does not apply because “it must be considered as uncertain whether problems like [this] are solvable” (Brouwer, 1908 [1975, 109–110]). ‘A fortiori it is not certain that any mathematical problem can either be solved or proved to be unsolvable,’ Brouwer says (1907 [1975, 79]), ‘though HILBERT, in “Mathematische Probleme”, believes that every mathematician is deeply convinced of it.’ Wittgenstein takes the same data and, in a way, draws the opposite conclusion. If, as Brouwer says, we are uncertain whether all or some “mathematical problems” are solvable, then we know that we do not have in hand an applicable decision procedure, which means that the alleged mathematical propositions are not decidable, here and now. “What ‘mathematical questions’ share with genuine questions,” Wittgenstein says (PR §151), “is simply that they can be answered.” This means that if we do not know how to decide an expression, then we do not know how to make it either proved (true) or refuted (false), which means that the Law of the Excluded Middle “doesn't apply” and, therefore, that our expression is not a mathematical proposition. Together, Wittgenstein's finitism and his criterion of algorithmic decidability shed considerable light on his highly controversial remarks about putatively meaningful conjectures such as FLT and GC. GC is not a mathematical proposition because we do not know how to decide it, and if someone like G. H. Hardy says that he ‘believes’ GC is true (PG 381; LFM 123; PI §578), we must answer that s/he only “has a hunch about the possibilities of extension of the present system” (LFM 139)—that one can only believe such an expression is ‘correct’ if one knows how to prove it. The only sense in which GC (or FLT) can be proved is that it can “correspond to a proof by induction,” which means that the unproved inductive step (e.g., “G(n) → G(n + 1)”) and the expression “∀nG(n)” are not mathematical propositions because we have no algorithmic means of looking for an induction (PG 367). A “general proposition” is senseless prior to an inductive proof “because the question would only have made sense if a general method of decision had been known before the particular proof was discovered” (PG 402). Unproved ‘inductions’ or inductive steps are not meaningful propositions because the Law of the Excluded Middle does not hold in the sense that we do not know of a decision procedure by means of which we can prove or refute the expression (PG 400; WVC 82). This position, however, seems to rob us of any reason to search for a ‘decision’ of a meaningless ‘expression’ such as GC. The intermediate Wittgenstein says only that “[a] mathematician is… guided by… certain analogies with the previous system” and that there is nothing “wrong or illegitimate if anyone concerns himself with Fermat's Last Theorem” (WVC 144). If e.g. I have a method for looking at integers that satisfy the equation x^2 + y^2 = z^2, then the formula x^n + y^n = z^n may stimulate me. I may let a formula stimulate me. Thus I shall say, Here there is a stimulus—but not a question. Mathematical problems are always such stimuli. (WVC 144, Jan. 1, 1931) More specifically, a mathematician may let a senseless conjecture such as FLT stimulate her/him if s/he wishes to know whether a calculus can be extended without altering its axioms or rules (LFM What is here going [o]n [in an attempt to decide GC] is an unsystematic attempt at constructing a calculus. If the attempt is successful, I shall again have a calculus in front of me, only a different one from the calculus I have been using so far. [italics added] (WVC 174–75; Sept. 21, 1931) If, e.g., we succeed in proving GC by mathematical induction (i.e., we prove “G(1)” and “G(n) → G(n + 1)”), we will then have a proof of the inductive step, but since the inductive step was not algorithmically decidable beforehand [(PR §§148, 155, 157), (PG 380)], in constructing the proof we have constructed a new calculus, a new calculating machine (WVC 106) in which we now know how to use this new “machine-part” (RFM VI, §13) (i.e., the unsystematically proved inductive step). Before the proof, the inductive step is not a mathematical proposition with sense (in a particular calculus), whereas after the proof the inductive step is a mathematical proposition, with a new, determinate sense, in a newly created calculus. This demarcation of expressions without mathematical sense and proved or refuted propositions, each with a determinate sense in a particular calculus, is a view that Wittgenstein articulates in myriad different ways from 1929 through 1944. Whether or not it is ultimately defensible—and this is an absolutely crucial question for Wittgenstein's Philosophy of Mathematics—this strongly counter-intuitive aspect of Wittgenstein's account of algorithmic decidability, proof, and the sense of a mathematical proposition is a piece with his rejection of predeterminacy in mathematics. Even in the case where we algorithmically decide a mathematical proposition, the connections thereby made do not pre-exist the algorithmic decision, which means that even when we have a “mathematical question” that we decide by decision procedure, the expression only has a determinate sense qua proposition when it is decided. On Wittgenstein's account, both middle and later, “[a] new proof gives the proposition a place in a new system” (RFM VI, §13), it “locates it in the whole system of calculations,” though it “does not mention, certainly does not describe, the whole system of calculation that stands behind the proposition and gives it sense” (RFM VI, §11). Wittgenstein's unorthodox position here is a type of structuralism that partially results from his rejection of mathematical semantics. We erroneously think, e.g., that GC has a fully determinate sense because, given “the misleading way in which the mode of expression of word-language represents the sense of mathematical propositions” (PG 375), we call to mind false pictures and mistaken, referential conceptions of mathematical propositions whereby GC is about a mathematical reality and so has just a determinate sense as “There exist intelligent beings elsewhere in the universe” (i.e., a proposition that is determinately true or false, whether or not we ever know its truth-value). Wittgenstein breaks with this tradition, in all of its forms, stressing that, in mathematics, unlike the realm of contingent (or empirical) propositions, “if I am to know what a proposition like Fermat's last theorem says,” I must know its criterion of truth. Unlike the criterion of truth for an empirical proposition, which can be known before the proposition is decided, we cannot know the criterion of truth for an undecided mathematical proposition, though we are “acquainted with criteria for the truth of similar propositions” (RFM VI, §13). The intermediate Wittgenstein spends a great deal of time wrestling with real and irrational numbers. There are two distinct reasons for this. First, the real reason many of us are unwilling to abandon the notion of the actual infinite in mathematics is the prevalent conception of an irrational number as a necessarily infinite extension. ‘The confusion in the concept of the “actual infinite” arises’ [italics added], says Wittgenstein (PG 471), ‘from the unclear concept of irrational number, that is, from the fact that logically very different things are called “irrational numbers” without any clear limit being given to the concept.’ Second, and more fundamentally, the intermediate Wittgenstein wrestles with irrationals in such detail because he opposes foundationalism and especially its concept of a “gapless mathematical continuum,” its concept of a comprehensive theory of the real numbers (Han 2010), and set theoretical conceptions and ‘proofs’ as a foundation for arithmetic, real number theory, and mathematics as a whole. Indeed, Wittgenstein's discussion of irrationals is one with his critique of set theory, for, as he says, “[m]athematics is ridden through and through with the pernicious idioms of set theory,” such as “the way people speak of a line as composed of points,” when, in fact, “[a] line is a law and isn't composed of anything at all” [(PR §173), (PR §§181, 183, & 191), (PG 373, 460, 461, & 473)]. 2.5.1 Wittgenstein's Anti-Foundationalism and Genuine Irrational Numbers Since, on Wittgenstein's terms, mathematics consists exclusively of extensions and intensions (i.e., ‘rules’ or ‘laws’), an irrational is only an extension insofar as it is a sign (i.e., a ‘numeral,’ such as ‘√2’ or ‘π’). Given that there is no such thing as an infinite mathematical extension, it follows that an irrational number is not a unique infinite expansion, but rather a unique recursive rule or law (PR §181) that yields rational numbers (PR §186; PR §180). The rule for working out places of √2 is itself the numeral for the irrational number; and the reason I here speak of a ‘number’ is that I can calculate with these signs (certain rules for the construction of rational numbers) just as I can with rational numbers themselves. (PG 484) Due, however, to his anti-foundationalism, Wittgenstein takes the radical position that not all recursive real numbers (i.e., computable numbers) are genuine real numbers—a position that distinguishes his view from even Brouwer's. The problem, as Wittgenstein sees it, is that mathematicians, especially foundationalists (e.g., set theorists), have sought to accommodate physical continuity by a theory that ‘describes’ the mathematical continuum (PR §171). When, for example, we think of continuous motion and the (mere) density of the rationals, we reason that if an object moves continuously from A to B, and it travels only the distances marked by “rational points,” then it must skip some distances (intervals, or points) not marked by rational numbers. But if an object in continuous motion travels distances that cannot be commensurately measured by rationals alone, there must be ‘gaps’ between the rationals (PG 460), and so we must fill them, first, with recursive irrationals, and then, because “the set of all recursive irrationals” still leaves gaps, with “lawless irrationals.” [T]he enigma of the continuum arises because language misleads us into applying to it a picture that doesn't fit. Set theory preserves the inappropriate picture of something discontinuous, but makes statements about it that contradict the picture, under the impression that it is breaking with prejudices; whereas what should really have been done is to point out that the picture just doesn't fit… (PG 471) We add nothing that is needed to the differential and integral calculi by ‘completing’ a theory of real numbers with pseudo-irrationals and lawless irrationals, first because there are no gaps on the number line [(PR §§181, 183, & 191), (PG 373, 460, 461, & 473), (WVC 35)] and, second, because these alleged irrational numbers are not needed for a theory of the ‘continuum’ simply because there is no mathematical continuum. As the later Wittgenstein says (RFM V, §32), “[t]he picture of the number line is an absolutely natural one up to a certain point; that is to say so long as it is not used for a general theory of real numbers.” We have gone awry by misconstruing the nature of the geometrical line as a continuous collection of points, each with an associated real number, which has taken us well beyond the ‘natural’ picture of the number line in search of a “general theory of real numbers” (Han 2010). Thus, the principal reason Wittgenstein rejects certain constructive (computable) numbers is that they are unnecessary creations which engender conceptual confusions in mathematics (especially set theory). One of Wittgenstein's main aims in his lengthy discussions of rational numbers and pseudo-irrationals is to show that pseudo-irrationals, which are allegedly needed for the mathematical continuum, are not needed at all. To this end, Wittgenstein demands (a) that a real number must be “compar[able] with any rational number taken at random” (i.e., “it can be established whether it is greater than, less than, or equal to a rational number” (PR §191)) and (b) that “[a] number must measure in and of itself” and if a ‘number’ “leaves it to the rationals, we have no need of it” (PR §191) [(Frascolla 1980, 242–243); (Shanker 1987, 186–192); (Da Silva 1993, 93–94); (Marion 1995a, 162, 164); (Rodych 1999b, 281–291); (Lampert 2009)]. To demonstrate that some recursive (computable) reals are not genuine real numbers because they fail to satisfy (a) and (b), Wittgenstein defines the putative recursive real number as the rule “Construct the decimal expansion for √2, replacing every occurrence of a ‘5’ with a ‘3’” (PR §182); he similarly defines π′ as (PR §186) and, in a later work, redefines π′ as (PG 475). Although a pseudo-irrational such as π′ (on either definition) is “as unambiguous as… π or √2” (PG 476), it is ‘homeless’ according to Wittgenstein because, instead of using “the idioms of arithmetic” (PR §186), it is dependent upon the particular ‘incidental’ notation of a particular system (i.e., in some particular base) [(PR §188), (PR §182), and (PG 475)]. If we speak of various base-notational systems, we might say that π belongs to all systems, while π′ belongs only to one, which shows that π′ is not a genuine irrational because “there can't be irrational numbers of different types” (PR §180). Furthermore, pseudo-irrationals do not measure because they are homeless, artificial constructions parasitic upon numbers which have a natural place in a calculus that can be used to measure. We simply do not need these aberrations, because they are not sufficiently comparable to rationals and genuine irrationals. They are not irrational numbers according to Wittgenstein's criteria, which define, Wittgenstein interestingly asserts, “precisely what has been meant or looked for under the name ‘irrational number’” (PR §191). For exactly the same reason, if we define a “lawless irrational” as either (a) a non-rule-governed, non-periodic, infinite expansion in some base, or (b) a “free-choice sequence,” Wittgenstein rejects “lawless irrationals” because, insofar as they are not rule-governed, they are not comparable to rationals (or irrationals) and they are not needed. “[W]e cannot say that the decimal fractions developed in accordance with a law still need supplementing by an infinite set of irregular infinite decimal fractions that would be ‘brushed under the carpet’ if we were to restrict ourselves to those generated by a law,” Wittgenstein argues, for “[w]here is there such an infinite decimal that is generated by no law” “[a]nd how would we notice that it was missing?” (PR §181; cf. PG 473, 483–84). Similarly, a free-choice sequence, like a recipe for “endless bisection” or “endless dicing,” is not an infinitely complicated mathematical law (or rule), but rather no law at all, for after each individual throw of a coin, the point remains “infinitely indeterminate” (PR §186). For closely related reasons, Wittgenstein ridicules the Multiplicative Axiom (Axiom of Choice) both in the middle period (PR §146) and in the latter period (RFM V, §25; VII, §33). 2.5.2 Wittgenstein's Real Number Essentialism and the Dangers of Set Theory Superficially, at least, it seems as if Wittgenstein is offering an essentialist argument for the conclusion that real number arithmetic should not be extended in such-and-such a way. Such an essentialist account of real and irrational numbers seems to conflict with the actual freedom mathematicians have to extend and invent, with Wittgenstein's intermediate claim (PG 334) that “[f]or [him] one calculus is as good as another,” and with Wittgenstein's acceptance of complex and imaginary numbers. Wittgenstein's foundationalist critic (e.g., set theorist) will undoubtedly say that we have extended the term “irrational number” to lawless and pseudo-irrationals because they are needed for the mathematical continuum and because such “conceivable numbers” are much more like rule-governed irrationals than rationals. Though Wittgenstein stresses differences where others see similarities (LFM 15), in his intermediate attacks on pseudo-irrationals and foundationalism, he is not just emphasizing differences, he is attacking set theory's “pernicious idioms” (PR §173) and its “crudest imaginable misinterpretation of its own calculus” (PG 469–70) in an attempt to dissolve “misunderstandings without which [set theory] would never have been invented,” since it is “of no other use” (LFM 16–17). Complex and imaginary numbers have grown organically within mathematics, and they have proved their mettle in scientific applications, but pseudo-irrationals are inorganic creations invented solely for the sake of mistaken foundationalist aims. Wittgenstein's main point is not that we cannot create further recursive real numbers—indeed, we can create as many as we want—his point is that we can only really speak of different systems (sets) of real numbers (RFM II, §33) that are enumerable by a rule, and any attempt to speak of “the set of all real numbers” or any piecemeal attempt to add or consider new recursive reals (e.g., diagonal numbers) is a useless and/or futile endeavour based on foundational misconceptions. Indeed, in 1930 MS and TS passages on irrationals and Cantor's diagonal, which were not included in PR or PG, Wittgenstein says: “The concept “irrational number” is a dangerous pseudo-concept” (MS 108, 176; 1930; TS 210, 29; 1930). As we shall see in the next section, on Wittgenstein's account, if we do not understand irrationals rightly, we cannot but engender the mistakes that constitute set theory. Wittgenstein's critique of set theory begins somewhat benignly in the Tractatus, where he denounces Logicism and says (6.031) that “[t]he theory of classes is completely superfluous in mathematics” because, at least in part, “the generality required in mathematics is not accidental generality.” In his middle period, Wittgenstein begins a full-out assault on set theory that never abates. Set theory, he says, is “utter nonsense” (PR §§145, 174; WVC 102; PG 464, 470), ‘wrong’ (PR §174), and ‘laughable’ (PG 464); its “pernicious idioms” (PR §173) mislead us and the crudest possible misinterpretation is the very impetus of its invention (Hintikka 1993, 24, 27). Wittgenstein's intermediate critique of transfinite set theory (hereafter “set theory”) has two main components: (1) his discussion of the intension-extension distinction, and (2) his criticism of non-denumerability as cardinality. Late in the middle period, Wittgenstein seems to become more aware of the unbearable conflict between his strong formalism (PG 334) and his denigration of set theory as a purely formal, non-mathematical calculus (Rodych 1997, 217-219), which, as we shall see in Section 3.5, leads to the use of an extra-mathematical application criterion to demarcate transfinite set theory (and other purely formal sign-games) from mathematical calculi. 2.6.1 Intensions, Extensions, and the Fictitious Symbolism of Set Theory The search for a comprehensive theory of the real numbers and mathematical continuity has led to a “fictitious symbolism” (PR §174). Set theory attempts to grasp the infinite at a more general level than the investigation of the laws of the real numbers. It says that you can't grasp the actual infinite by means of mathematical symbolism at all and therefore it can only be described and not represented. … One might say of this theory that it buys a pig in a poke. Let the infinite accommodate itself in this box as best it can. (PG 468; cf. PR §170) As Wittgenstein puts it at (PG 461), “the mistake in the set-theoretical approach consists time and again in treating laws and enumerations (lists) as essentially the same kind of thing and arranging them in parallel series so that one fills in gaps left by the other.” This is a mistake because it is ‘nonsense’ to say “we cannot enumerate all the numbers of a set, but we can give a description,” for “[t]he one is not a substitute for the other” (WVC 102; June 19, 1930); “there isn't a dualism [of] the law and the infinite series obeying it” (PR §180). “Set theory is wrong” and nonsensical (PR §174), says Wittgenstein, because it presupposes a fictitious symbolism of infinite signs (PG 469) instead of an actual symbolism with finite signs. The grand intimation of set theory, which begins with “Dirichlet's concept of a function” (WVC 102–03), is that we can in principle represent an infinite set by an enumeration, but because of human or physical limitations, we will instead describe it intensionally. But, says Wittgenstein, “[t]here can't be possibility and actuality in mathematics,” for mathematics is an actual calculus, which “is concerned only with the signs with which it actually operates” (PG 469). As Wittgenstein puts it at (PR §159), the fact that “we can't describe mathematics, we can only do it” in and “of itself abolishes every ‘set theory’.” Perhaps the best example of this phenomenon is Dedekind, who in giving his ‘definition of an “infinite class” as “a class which is similar to a proper subclass of itself” (PG 464), “tried to describe an infinite class” (PG 463). If, however, we try to apply this ‘definition’ to a particular class in order to ascertain whether it is finite or infinite, the attempt is ‘laughable’ if we apply it to a finite class, such as “a certain row of trees,” and it is ‘nonsense’ if we apply it to “an infinite class,” for we cannot even attempt “to co-ordinate it” (PG 464), because “the relation m = 2n [does not] correlate the class of all numbers with one of its subclasses” (PR §141), it is an “infinite process” which “correlates any arbitrary number with another.” So, although we can use m = 2n on the rule for generating the naturals (i.e., our domain) and thereby construct the pairs (2,1), (4,2), (6,3), (8,4), etc., in doing so we do not correlate two infinite sets or extensions (WVC 103). If we try to apply Dedekind's definition as a criterion for determining whether a given set is infinite by establishing a 1–1 correspondence between two inductive rules for generating “infinite extensions,” one of which is an “extensional subset” of the other, we can't possibly learn anything we didn't already know when we applied the ‘criterion’ to two inductive rules. If Dedekind or anyone else insists on calling an inductive rule an “infinite set,” he and we must still mark the categorical difference between such a set and a finite set with a determinate, finite cardinality. Indeed, on Wittgenstein's account, the failure to properly distinguish mathematical extensions and intensions is the root cause of the mistaken interpretation of Cantor's diagonal proof as a proof of the existence of infinite sets of lesser and greater cardinality. 2.6.2 Against Non-Denumerability Wittgenstein's criticism of non-denumerability is primarily implicit during the middle period. Only after 1937 does he provide concrete arguments purporting to show, e.g., that Cantor's diagonal cannot prove that some infinite sets have greater ‘multiplicity’ than others. Nonetheless, the intermediate Wittgenstein clearly rejects the notion that a non-denumerably infinite set is greater in cardinality than a denumerably infinite set. When people say ‘The set of all transcendental numbers is greater than that of algebraic numbers’, that's nonsense. The set is of a different kind. It isn't ‘no longer’ denumerable, it's simply not denumerable! (PR §174) As with his intermediate views on genuine irrationals and the Multiplicative Axiom, Wittgenstein here looks at the diagonal proof of the non-denumerability of “the set of transcendental numbers” as one that shows only that transcendental numbers cannot be recursively enumerated. It is nonsense, he says, to go from the warranted conclusion that these numbers are not, in principle, enumerable to the conclusion that the set of transcendental numbers is greater in cardinality than the set of algebraic numbers, which is recursively enumerable. What we have here are two very different conceptions of a number-type. In the case of algebraic numbers, we have a decision procedure for determining of any given number whether or not it is algebraic, and we have a method of enumerating the algebraic numbers such that we can see that ‘each’ algebraic number “will be” enumerated. In the case of transcendental numbers, on the other hand, we have proofs that some numbers are transcendental (i.e., non-algebraic), and we have a proof that we cannot recursively enumerate each and every thing we would call a “transcendental number.” At (PG 461), Wittgenstein similarly speaks of set theory's “mathematical pseudo-concepts” leading to a fundamental difficulty, which begins when we unconsciously presuppose that there is sense to the idea of ordering the rationals by size—“that the attempt is thinkable”—and culminates in similarly thinking that it is possible to enumerate the real numbers, which we then discover is impossible. Though the intermediate Wittgenstein certainly seems highly critical of the alleged proof that some infinite sets (e.g., the reals) are greater in cardinality than other infinite sets, and though he discusses the “diagonal procedure” in February 1929 and in June 1930 (MS 106, 266; MS 108, 180), along with a diagonal diagram, these and other early-middle ruminations did not make it into the typescripts for either PR or PG. As we shall see in Section 3.4, the later Wittgenstein analyzes Cantor's diagonal and claims of non-denumerability in some detail. The first and most important thing to note about Wittgenstein's later Philosophy of Mathematics is that RFM, first published in 1956, consists of selections taken from a number of MSS (1937–1944), most of one large typescript (1938), and three short typescripts (1938), each of which constitutes an Appendix to (RFM I). For this reason and because some MSS containing much material on mathematics (e.g., (MS 123)) were not used at all for RFM, philosophers have not been able to read Wittgenstein's later remarks on mathematics as they were written in the MSS used for RFM and they have not had access (until the 2000–2001 release of the Nachlass on CD-ROM) to much of Wittgenstein's later work on mathematics. It must be emphasized, therefore, that this Encyclopedia article is being written during a transitional period. Until philosophers have used the Nachlass to build a comprehensive picture of Wittgenstein's complete and evolving Philosophy of Mathematics, we will not be able to say definitively which views the later Wittgenstein retained, which he changed, and which he dropped. In the interim, this article will outline Wittgenstein's later Philosophy of Mathematics, drawing primarily on RFM, to a much lesser extent LFM (1939 Cambridge lectures), and, where possible, previously unpublished material in Wittgenstein's Nachlass. It should also be noted at the outset that commentators disagree about the continuity of Wittgenstein's middle and later Philosophies of Mathematics. Some argue that the later views are significantly different from the intermediate views [(Frascolla 1994), (Gerrard 1991, 127, 131–32), (Floyd 2005, 105–106)], while others argue that, for the most part, Wittgenstein's Philosophy of Mathematics evolves from the middle to the later period without significant changes or renunciations [(Wrigley 1993), (Marion 1998), (Rodych 1997, 2000a, 2000b)]. The remainder of this article adopts the second interpretation, explicating Wittgenstein's later Philosophy of Mathematics as largely continuous with his intermediate views, except for the important introduction of an extra-mathematical application criterion. Perhaps the most important constant in Wittgenstein's Philosophy of Mathematics, middle and late, is that he consistently maintains that mathematics is our, human invention, and that, indeed, everything in mathematics is invented. Just as the middle Wittgenstein says that “[w]e make mathematics,” the later Wittgenstein says that we ‘invent’ mathematics (RFM I, §168; II, §38; V, §§5, 9 and 11; PG 469–70) and that “the mathematician is not a discoverer: he is an inventor” (RFM, Appendix II, §2; (LFM 22, 82). Nothing exists mathematically unless and until we have invented it. In arguing against mathematical discovery, Wittgenstein is not just rejecting Platonism, he is also rejecting a rather standard philosophical view according to which human beings invent mathematical calculi, but once a calculus has been invented, we thereafter discover finitely many of its infinitely many provable and true theorems. As Wittgenstein himself asks (RFM IV, §48), “might it not be said that the rules lead this way, even if no one went it?” If “someone produced a proof [of “Goldbach's theorem”],” “[c]ouldn't one say,” Wittgenstein asks (LFM 144), “that the possibility of this proof was a fact in the realms of mathematical reality”—that “[i]n order [to] find it, it must in some sense be there”—“[i]t must be a possible structure”? Unlike many or most philosophers of mathematics, Wittgenstein resists the ‘Yes’ answer that we discover truths about a mathematical calculus that come into existence the moment we invent the calculus [(PR §141), (PG 283, 466), (LFM 139)]. Wittgenstein rejects the modal reification of possibility as actuality—that provability and constructibility are (actual) facts—by arguing that it is at the very least wrong-headed to say with the Platonist that because “a straight line can be drawn between any two points,… the line already exists even if no one has drawn it”—to say “[w]hat in the ordinary world we call a possibility is in the geometrical world a reality” (LFM 144; RFM I, §21). One might as well say, Wittgenstein suggests (PG 374), that “chess only had to be discovered, it was always there!” At (MS 122, 3v; Oct. 18, 1939), Wittgenstein once again emphasizes the difference between illusory mathematical discovery and genuine mathematical invention. I want to get away from the formulation: “I now know more about the calculus”, and replace it with “I now have a different calculus”. The sense of this is always to keep before one's eyes the full scale of the gulf between a mathematical knowing and non-mathematical knowing.^[3] And as with the middle period, the later Wittgenstein similarly says (MS 121, 27r; May 27, 1938) that “[i]t helps if one says: the proof of the Fermat proposition is not to be discovered, but to be The difference between the ‘anthropological’ and the mathematical account is that in the first we are not tempted to speak of ‘mathematical facts,’ but rather that in this account the facts are never mathematical ones, never make mathematical propositions true or false. (MS 117, 263; March 15, 1940) There are no mathematical facts just as there are no (genuine) mathematical propositions. Repeating his intermediate view, the later Wittgenstein says (MS 121, 71v; 27 Dec., 1938): “Mathematics consists of [calculi | calculations], not of propositions.” This radical constructivist conception of mathematics prompts Wittgenstein to make notorious remarks—remarks that virtually no one else would make—such as the infamous (RFM V, §9): “However queer it sounds, the further expansion of an irrational number is a further expansion of mathematics.” 3.1.1 Wittgenstein's Later Anti-Platonism: The Natural History of Numbers and the Vacuity of Platonism As in the middle period, the later Wittgenstein maintains that mathematics is essentially syntactical and non-referential, which, in and of itself, makes Wittgenstein's philosophy of mathematics anti-Platonist insofar as Platonism is the view that mathematical terms and propositions refer to objects and/or facts and that mathematical propositions are true by virtue of agreeing with mathematical facts. The later Wittgenstein, however, wishes to ‘warn’ us that our thinking is saturated with the idea of “[a]rithmetic as the natural history (mineralogy) of numbers” (RFM IV, §11). When, for instance, Wittgenstein discusses the claim that fractions cannot be ordered by magnitude, he says that this sounds ‘remarkable’ in a way that a mundane proposition of the differential calculus does not, for the latter proposition is associated with an application in physics, “whereas this proposition… seems to [‘solely’] concern… the natural history of mathematical objects themselves” (RFM II, §40). Wittgenstein stresses that he is trying to ‘warn’ us against this ‘aspect’—the idea that the foregoing proposition about fractions “introduces us to the mysteries of the mathematical world,” which exists somewhere as a completed totality, awaiting our prodding and our discoveries. The fact that we regard mathematical propositions as being about mathematical objects and mathematical investigation “as the exploration of these objects” is “already mathematical alchemy,” claims Wittgenstein (RFM V, §16), since “it is not possible to appeal to the meaning [‘Bedeutung’] of the signs in mathematics,… because it is only mathematics that gives them their meaning [‘Bedeutung’].” Platonism is dangerously misleading, according to Wittgenstein, because it suggests a picture of pre -existence, predetermination and discovery that is completely at odds with what we find if we actually examine and describe mathematics and mathematical activity. “I should like to be able to describe,” says Wittgenstein (RFM IV, §13), “how it comes about that mathematics appears to us now as the natural history of the domain of numbers, now again as a collection of rules.” Wittgenstein, however, does not endeavour to refute Platonism. His aim, instead, is to clarify what Platonism is and what it says, implicitly and explicitly (including variants of Platonism that claim, e.g., that if a proposition is provable in an axiom system, then there already exists a path [i.e., a proof] from the axioms to that proposition [(RFM I, §21); (Marion 1998, 13–14, 226), (Rodych 1997; 2000b, 267–280), (Steiner 2000, 334)]). Platonism is either “a mere truism” (LFM 239), Wittgenstein says, or it is a ‘picture’ consisting of “an infinity of shadowy worlds” (LFM 145), which, as such, lacks ‘utility’ (cf. PI §254) because it explains nothing and it misleads at every turn. Though commentators and critics do not agree as to whether the later Wittgenstein is still a finitist and whether, if he is, his finitism is as radical as his intermediate rejection of unbounded mathematical quantification (Maddy 1986, 300–301, 310), the overwhelming evidence indicates that the later Wittgenstein still rejects the actual infinite (RFM V, §21; Zettel §274, 1947) and infinite mathematical extensions. The first, and perhaps most definitive, indication that the later Wittgenstein maintains his finitism is his continued and consistent insistence that irrational numbers are rules for constructing finite expansions, not infinite mathematical extensions. “The concepts of infinite decimals in mathematical propositions are not concepts of series,” says Wittgenstein (RFM V, §19), “but of the unlimited technique of expansion of series.” We are misled by “[t]he extensional definitions of functions, of real numbers etc.” (RFM V, §35), but once we recognize the Dedekind cut as “an extensional image,” we see that we are not “led to √2 by way of the concept of a cut” (RFM V, §34). On the later Wittgenstein's account, there simply is no property, no rule, no systematic means of defining each and every irrational number intensionally, which means there is no criterion “for the irrational numbers being complete” (PR §181). As in his intermediate position, the later Wittgenstein claims that ‘ℵ[0]’ and “infinite series” get their mathematical uses from the use of ‘infinity’ in ordinary language (RFM II, §60). Although, in ordinary language, we often use ‘infinite’ and “infinitely many” as answers to the question “how many?,” and though we associate infinity with the enormously large, the principal use we make of ‘infinite’ and ‘infinity’ is to speak of the unlimited (RFM V, §14) and unlimited techniques (RFM II, §45; PI §218). This fact is brought out by the fact “that the technique of learning ℵ[0] numerals is different from the technique of learning 100,000 numerals” (LFM 31). When we say, e.g., that “there are an infinite number of even numbers” we mean that we have a mathematical technique or rule for generating even numbers which is limitless, which is markedly different from a limited technique or rule for generating a finite number of numbers, such as 1–100,000,000. “We learn an endless technique,” says Wittgenstein (RFM V, §19), “but what is in question here is not some gigantic extension.” An infinite sequence, for example, is not a gigantic extension because it is not an extension, and ‘ℵ[0]’ is not a cardinal number, for “how is this picture connected with the calculus,” given that “its connexion is not that of the picture | | | | with 4” (i.e., given that ‘ℵ[0]’ is not connected to a (finite) extension)? This shows, says Wittgenstein (RFM II, §58), that we ought to avoid the word ‘infinite’ in mathematics wherever it seems to give a meaning to the calculus, rather than acquiring its meaning from the calculus and its use in the calculus. Once we see that the calculus contains nothing infinite, we should not be ‘disappointed’ (RFM II, §60), but simply note (RFM II, §59) that it is not “really necessary… to conjure up the picture of the infinite (of the enormously A second strong indication that the later Wittgenstein maintains his finitism is his continued and consistent treatment of ‘propositions’ of the type “There are three consecutive 7s in the decimal expansion of π” (hereafter ‘PIC’).^[4] In the middle period, PIC (and its putative negation, ¬PIC, namely, “It is not the case that there are three consecutive 7s in the decimal expansion of π”) is not a meaningful mathematical “statement at all” (WVC 81–82: Footnote #1). On Wittgenstein's intermediate view, PIC—like FLT, GC, and the Fundamental Theorem of Algebra—is not a mathematical proposition because we do not have in hand an applicable decision procedure by which we can decide it in a particular calculus. For this reason, we can only meaningfully state finitistic propositions regarding the expansion of π, such as “There exist three consecutive 7s in the first 10,000 places of the expansion of π” (WVC 71; 81–82, Footnote #1). The later Wittgenstein maintains this position in various passages in RFM (Bernays 1959 [1986, 176]). For example, to someone who says that since “the rule of expansion determine[s] the series completely,” “it must implicitly determine all questions about the structure of the series,” Wittgenstein replies: “Here you are thinking of finite series” (RFM V, §11). If PIC were a mathematical question (or problem)—if it were finitistically restricted—it would be algorithmically decidable, which it is not [(RFM V, §21), (LFM 31–32, 111, 170), (WVC 102–03)]. As Wittgenstein says at (RFM V, §9): “The question… changes its status, when it becomes decidable,” “[f]or a connexion is made then, which formerly was not there.” And if, moreover, one invokes the Law of the Excluded Middle to establish that PIC is a mathematical proposition—i.e., by saying that one of these “two pictures… must correspond to the fact” (RFM V, §10)—one simply begs the question (RFM V, §12), for if we have doubts about the mathematical status of PIC, we will not be swayed by a person who asserts “PIC ∨ ¬PIC” (RFM VII, §41; V, §13). Wittgenstein's finitism, constructivism, and conception of mathematical decidability are interestingly connected at (RFM VII, §41, par. 2–5). What harm is done e.g. by saying that God knows all irrational numbers? Or: that they are already there, even though we only know certain of them? Why are these pictures not harmless? For one thing, they hide certain problems.— (MS 124, p. 139; March 16, 1944) Suppose that people go on and on calculating the expansion of π. So God, who knows everything, knows whether they will have reached ‘777’ by the end of the world. But can his omniscience decide whether they would have reached it after the end of the world? It cannot. I want to say: Even God can determine something mathematical only by mathematics. Even for him the mere rule of expansion cannot decide anything that it does not decide for us. We might put it like this: if the rule for the expansion has been given us, a calculation can tell us that there is a ‘2’ at the fifth place. Could God have known this, without the calculation, purely from the rule of expansion? I want to say: No. (MS 124, pp. 175–176; March 23–24, 1944) What Wittgenstein means here is that God's omniscience might, by calculation, find that ‘777’ occurs at the interval [n,n+2], but, on the other hand, God might go on calculating forever without ‘777’ ever turning up. Since π is not a completed infinite extension that can be completely surveyed by an omniscient being (i.e., it is not a fact that can be known by an omniscient mind), even God has only the rule, and so God's omniscience is no advantage in this case [(LFM 103–04); cf. (Weyl, 1921 [1998, 97])]. Like us, with our modest minds, an omniscient mind (i.e., God) can only calculate the expansion of π to some n^th decimal place—where our n is minute and God's n is (relatively) enormous—and at no n^th decimal place could any mind rightly conclude that because ‘777’ has not turned up, it, therefore, will never turn up. On one fairly standard interpretation, the later Wittgenstein says that “true in calculus Γ“ is identical to “provable in calculus Γ” and, therefore, that a mathematical proposition of calculus Γ is a concatenation of signs that is either provable (in principle) or refutable (in principle) in calculus Γ [(Goodstein 1972, 279, 282), (Anderson 1958, 487), (Klenk 1976, 13), (Frascolla 1994, 59)]. On this interpretation, the later Wittgenstein precludes undecidable mathematical propositions, but he allows that some undecided expressions are propositions of a calculus because they are decidable in principle (i.e., in the absence of a known, applicable decision procedure). There is considerable evidence, however, that the later Wittgenstein maintains his intermediate position that an expression is a meaningful mathematical proposition only within a given calculus and iff we knowingly have in hand an applicable and effective decision procedure by means of which we can decide it. For example, though Wittgenstein vacillates between “provable in PM” and “proved in PM” at (RFM App. III, §6, §8), he does so in order to use the former to consider the alleged conclusion of Gödel's proof (i.e., that there exist true but unprovable mathematical propositions), which he then rebuts with his own identification of “true in calculus Γ” with “proved in calculus Γ” (i.e., not with “provable in calculus Γ“) [(Wang 1991, 253), (Rodych 1999a, 177)]. This construal is corroborated by numerous passages in which Wittgenstein rejects the received view that a provable but unproved proposition is true, as he does when he asserts that (RFM III, §31, 1939) a proof “makes new connexions,” “[i]t does not establish that they are there” because “they do not exist until it makes them,” and when he says (RFM VII, §10, 1941) that “[a] new proof gives the proposition a place in a new system.” Furthermore, as we have just seen, Wittgenstein rejects PIC as a non-proposition on the grounds that it is not algorithmically decidable, while admitting finitistic versions of PIC because they are algorithmically decidable. Perhaps the most compelling evidence that the later Wittgenstein maintains algorithmic decidability as his criterion for a mathematical proposition lies in the fact that, at (RFM V, §9, 1942), he says in two distinct ways that a mathematical ‘question’ can become decidable and that when this happens, a new connexion is ‘made’ which previously did not exist. Indeed, Wittgenstein cautions us against appearances by saying that “it looks as if a ground for the decision were already there,” when, in fact, “it has yet to be invented.” These passages strongly militate against the claim that the later Wittgenstein grants that proposition φ is decidable in calculus Γ iff it is provable or refutable in principle. Moreover, if Wittgenstein held this position, he would claim, contra (RFM V, §9), that a question or proposition does not become decidable since it simply (always) is decidable. If it is provable, and we simply don't yet know this to be the case, there already is a connection between, say, our axioms and rules and the proposition in question. What Wittgenstein says, however, is that the modalities provable and refutable are shadowy forms of reality—that possibility is not actuality in mathematics [(PR §§141, 144, 172), (PG 281, 283, 299, 371, 466, 469)], (LFM 139)]. Thus, the later Wittgenstein agrees with the intermediate Wittgenstein that the only sense in which an undecided mathematical proposition (RFM VII, §40, 1944) can be decidable is in the sense that we know how to decide it by means of an applicable decision procedure. Largely a product of his anti-foundationalism and his criticism of the extension-intension conflation, Wittgenstein's later critique of set theory is highly consonant with his intermediate critique [(PR §§109, 168), (PG 334, 369, 469), (LFM 172, 224, 229), and (RFM III, §43, 46, 85, 90; VII, §16)]. Given that mathematics is a “MOTLEY of techniques of proof” (RFM III, §46), it does not require a foundation (RFM VII, §16) and it cannot be given a self-evident foundation [(PR §160), (WVC 34 & 62), (RFM IV, §3)]. Since set theory was invented to provide mathematics with a foundation, it is, minimally, unnecessary. Even if set theory is unnecessary, it still might constitute a solid foundation for mathematics. In his core criticism of set theory, however, the later Wittgenstein denies this, saying that the diagonal proof does not prove non-denumerability, for “[i]t means nothing to say: “Therefore the X numbers are not denumerable” (RFM II, §10). When the diagonal is construed as a proof of greater and lesser infinite sets it is a “puffed-up proof,” which, as Poincaré argued (1913b, 61–62), purports to prove or show more than “its means allow it” (RFM II, §21). If it were said: “Consideration of the diagonal procedure shews you that the concept ‘real number’ has much less analogy with the concept ‘cardinal number’ than we, being misled by certain analogies, are inclined to believe”, that would have a good and honest sense. But just the opposite happens: one pretends to compare the ‘set’ of real numbers in magnitude with that of cardinal numbers. The difference in kind between the two conceptions is represented, by a skew form of expression, as difference of extension. I believe, and hope, that a future generation will laugh at this hocus pocus. (RFM II, §22) The sickness of a time is cured by an alteration in the mode of life of human beings… (RFM II, §23) The “hocus pocus” of the diagonal proof rests, as always for Wittgenstein, on a conflation of extension and intension, on the failure to properly distinguish sets as rules for generating extensions and (finite) extensions. By way of this confusion “a difference in kind” (i.e., unlimited rule vs. finite extension) “is represented by a skew form of expression,” namely as a difference in the cardinality of two infinite extensions. Not only can the diagonal not prove that one infinite set is greater in cardinality than another infinite set, according to Wittgenstein, nothing could prove this, simply because “infinite sets” are not extensions, and hence not infinite extensions. But instead of interpreting Cantor's diagonal proof honestly, we take the proof to “show there are numbers bigger than the infinite,” which “sets the whole mind in a whirl, and gives the pleasant feeling of paradox” (LFM 16–17)—a “giddiness attacks us when we think of certain theorems in set theory”—“when we are performing a piece of logical sleight-of-hand” (PI §412; §426; 1945). This giddiness and pleasant feeling of paradox, says Wittgenstein (LFM 16), “may be the chief reason [set theory] was Though Cantor's diagonal is not a proof of non-denumerability, when it is expressed in a constructive manner, as Wittgenstein himself expresses it at (RFM II, §1), “it gives sense to the mathematical proposition that the number so-and-so is different from all those of the system” (RFM II, §29). That is, the proof proves non-enumerability: it proves that for any given definite real number concept (e.g., recursive real), one cannot enumerate ‘all’ such numbers because one can always construct a diagonal number, which falls under the same concept and is not in the enumeration. “One might say,” Wittgenstein says, “I call number-concept X non-denumerable if it has been stipulated that, whatever numbers falling under this concept you arrange in a series, the diagonal number of this series is also to fall under that concept” (RFM II, §10; cf. II, §§30, 31, 13). One lesson to be learned from this, according to Wittgenstein (RFM II, §33), is that “there are diverse systems of irrational points to be found in the number line,” each of which can be given by a recursive rule, but “no system of irrational numbers,” and “also no super-system, no ‘set of irrational numbers’ of higher-order infinity.” Cantor has shown that we can construct “infinitely many” diverse systems of irrational numbers, but we cannot construct an exhaustive system of all the irrational numbers (RFM II, §29). As Wittgenstein says at (MS 121, 71r; Dec. 27, 1938), three pages after the passage used for (RFM II, §57): “If you now call the Cantorian procedure one for producing a new real number, you will now no longer be inclined to speak of a system of all real numbers” (italics added). From Cantor's proof, however, set theorists erroneously conclude that “the set of irrational numbers” is greater in multiplicity than any enumeration of irrationals (or the set of rationals), when the only conclusion to draw is that there is no such thing as the set of all the irrational numbers. The truly dangerous aspect to ‘propositions’ such as “The real numbers cannot be arranged in a series” and “The set… is not denumerable” is that they make concept formation [i.e., our invention] “look like a fact of nature” (i.e., something we discover) (RFM II §§16, 37). At best, we have a vague idea of the concept of “real number,” but only if we restrict this idea to “recursive real number” and only if we recognize that having the concept does not mean having a set of all recursive real numbers. The principal and most significant change from the middle to later writings on mathematics is Wittgenstein's (re-)introduction of an extra-mathematical application criterion, which is used to distinguish mere “sign-games” from mathematical language-games. “[I]t is essential to mathematics that its signs are also employed in mufti,” Wittgenstein states, for “[i]t is the use outside mathematics, and so the meaning [‘Bedeutung’] of the signs, that makes the sign-game into mathematics” (i.e., a mathematical “language-game”) [(RFM V, §2, 1942), (LFM 140–141, 169–70)]. As Wittgenstein says at (RFM V, §41, 1943), “[c]oncepts which occur in ‘necessary’ propositions must also occur and have a meaning [‘Bedeutung’] in non-necessary ones” [italics added]. If two proofs prove the same proposition, says Wittgenstein, this means that “both demonstrate it as a suitable instrument for the same purpose,” which “is an allusion to something outside mathematics” (RFM VII, §10, 1941; italics added). As we have seen, this criterion was present in the Tractatus (6.211), but noticeably absent in the middle period. The reason for this absence is probably that the intermediate Wittgenstein wanted to stress that in mathematics everything is syntax and nothing is meaning. Hence, in his criticisms of Hilbert's ‘contentual’ mathematics (Hilbert 1925) and Brouwer's reliance upon intuition to determine the meaningful content of (especially undecidable) mathematical propositions, Wittgenstein couched his finitistic constructivism in strong formalism, emphasizing that a mathematical calculus does not need an extra-mathematical application (PR §109; WVC 105). There seem to be two reasons why the later Wittgenstein reintroduces extra-mathematical application as a necessary condition of a mathematical language-game. First, the later Wittgenstein has an even greater interest in the use of natural and formal languages in diverse “forms of life” (PI §23), which prompts him to emphasize that, in many cases, a mathematical ‘proposition’ functions as if it were an empirical proposition “hardened into a rule” (RFM VI, §23) and that mathematics plays diverse applied roles in many forms of human activity (e.g., science, technology, predictions). Second, the extra-mathematical application criterion relieves the tension between Wittgenstein's intermediate critique of set theory and his strong formalism according to which “one calculus is as good as another” (PG 334). By demarcating mathematical language-games from non-mathematical sign-games, Wittgenstein can now claim that, “for the time being,” set theory is merely a formal sign-game. These considerations may lead us to say that 2^ℵ[0] > ℵ[0]. That is to say: we can make the considerations lead us to that. Or: we can say this and give this as our reason. But if we do say it—what are we to do next? In what practice is this proposition anchored? It is for the time being a piece of mathematical architecture which hangs in the air, and looks as if it were, let us say, an architrave, but not supported by anything and supporting nothing. (RFM II, §35) It is not that Wittgenstein's later criticisms of set theory change, it is, rather, that once we see that set theory has no extra-mathematical application, we will focus on its calculations, proofs, and prose and “subject the interest of the calculations to a test” (RFM II, §62). By means of Wittgenstein's “immensely important” ‘investigation’ (LFM 103), we will find, Wittgenstein expects, that set theory is uninteresting (e.g., that the non-enumerability of “the reals” is uninteresting and useless) and that our entire interest in it lies in the ‘charm’ of the mistaken prose interpretation of its proofs (LFM 16). More importantly, though there is “a solid core to all [its] glistening concept-formations” (RFM V, §16), once we see it as “as a mistake of ideas,” we will see that propositions such as “2^ℵ[0] > ℵ[0]” are not anchored in an extra-mathematical practice, that “Cantor's paradise” “is not a paradise,” and we will then leave “of [our] own accord” (LFM 103). It must be emphasized, however, that the later Wittgenstein still maintains that the operations within a mathematical calculus are purely formal, syntactical operations governed by rules of syntax (i.e., the solid core of formalism). It is of course clear that the mathematician, in so far as he really is ‘playing a game’…[is] acting in accordance with certain rules. (RFM V, §1) To say mathematics is a game is supposed to mean: in proving, we need never appeal to the meaning [‘Bedeutung’] of the signs, that is to their extra-mathematical application. (RFM V, §4) Where, during the middle period, Wittgenstein speaks of “arithmetic [as] a kind of geometry” at (PR §109 & §111), the later Wittgenstein similarly speaks of “the geometry of proofs” (RFM I, App. III, §14), the “geometrical cogency” of proofs (RFM III, §43), and a “geometrical application” according to which the “transformation of signs” in accordance with “transformation-rules” (RFM VI, §2, 1941) shows that “when mathematics is divested of all content, it would remain that certain signs can be constructed from others according to certain rules” (RFM III, §38). Hence, the question whether a concatenation of signs is a proposition of a given mathematical calculus (i.e., a calculus with an extra-mathematical application) is still an internal, syntactical question, which we can answer with knowledge of the proofs and decision procedures of the calculus. RFM is perhaps most (in)famous for Wittgenstein's (RFM App. III) treatment of “true but unprovable” mathematical propositions. Early reviewers said that “[t]he arguments are wild” (Kreisel 1958, 153), that the passages “on Gödel's theorem… are of poor quality or contain definite errors” (Dummett 1959, 324), and that (RFM App. III) “throws no light on Gödel's work” (Goodstein 1957, 551). “Wittgenstein seems to want to legislate [“[q]uestions about completeness”] out of existence,” Anderson said, (1958, 486–87) when, in fact, he certainly cannot dispose of Gödel's demonstrations “by confusing truth with provability.” Additionally, Bernays, Anderson (1958, 486), and Kreisel (1958, 153–54) claimed that Wittgenstein failed to appreciate “Gödel's quite explicit premiss of the consistency of the considered formal system” (Bernays 1959, 15), thereby failing to appreciate the conditional nature of Gödel's First Incompleteness Theorem. On the reading of these four early expert reviewers, Wittgenstein failed to understand Gödel's Theorem because he failed to understand the mechanics of Gödel's proof and he erroneously thought he could refute or undermine Gödel's proof simply by identifying “true in PM” (i.e., Principia Mathematica) with “proved/provable in PM.” Interestingly, we now have two pieces of evidence [(Kreisel 1998, 119); (Rodych 2003, 282, 307)] that Wittgenstein wrote (RFM App. III) in 1937–38 after reading only the informal, ‘casual’ (MS 126, 126–127; Dec. 13, 1942) introduction of (Gödel 1931) and that, therefore, his use of a self-referential proposition as the “true but unprovable proposition” may be based on Gödel's introductory, informal statements, namely that “the undecidable proposition [R(q);q] states… that [R(q);q] is not provable” (1931, 598) and that “[R(q);q] says about itself that it is not provable” (1931, 599). Perplexingly, only two of the four famous reviewers even mentioned Wittgenstein's (RFM VII, §§19, 21–22, 1941)) explicit remarks on ‘Gödel's’ First Incompleteness Theorem [(Bernays 1959, 2), (Anderson 1958, 487)], which, though flawed, capture the number-theoretic nature of the Gödelian proposition and the functioning of Gödel-numbering, probably because Wittgenstein had by then read or skimmed the body of Gödel's 1931 paper (Rodych 2003, 304–07). The first thing to note, therefore, about (RFM App. III) is that Wittgenstein mistakenly thinks—again, perhaps because Wittgenstein had read only Gödel's Introduction—(a) that Gödel proves that there are true but unprovable propositions of PM (when, in fact, Gödel syntactically proves that if PM is ω-consistent, the Gödelian proposition is undecidable in PM) and (b) that Gödel's proof uses a self-referential proposition to semantically show that there are true but unprovable propositions of PM. For this reason, Wittgenstein has two main aims in (RFM App. III): (1) to refute or undermine, on its own terms, the alleged Gödel proof of true but unprovable propositions of PM, and (2) to show that, on his own terms, where “true in calculus Γ” is identified with “proved in calculus Γ,” the very idea of a true but unprovable proposition of calculus Γ is meaningless. Thus, at (RFM App. III, §8) (hereafter simply ‘§8’), Wittgenstein begins his presentation of what he takes to be Gödel's proof by having someone say: “I have constructed a proposition (I will use ‘P’ to designate it) in Russell's symbolism, and by means of certain definitions and transformations it can be so interpreted that it says: ‘P is not provable in Russell's system’.” That is, Wittgenstein's Gödelian constructs a proposition that is semantically self-referential and which specifically says of itself that it is not provable in PM. With this erroneous, self-referential proposition P [used also at (§10), (§11), (§17), (§18)], Wittgenstein presents a proof-sketch very similar to Gödel's own informal semantic proof ‘sketch’ in the Introduction of his famous paper (1931, 598). Must I not say that this proposition on the one hand is true, and on the other hand is unprovable? For suppose it were false; then it is true that it is provable. And that surely cannot be! And if it is proved, then it is proved that it is not provable. Thus it can only be true, but unprovable. (§8) The reasoning here is a double reductio. Assume (a) that P must either be true or false in Russell's system, and (b) that P must either be provable or unprovable in Russell's system. If (a), P must be true, for if we suppose that P is false, since P says of itself that it is unprovable, “it is true that it is provable,” and if it is provable, it must be true (which is a contradiction), and hence, given what P means or says, it is true that P is unprovable (which is a contradiction). Second, if (b), P must be unprovable, for if P “is proved, then it is proved that it is not provable,” which is a contradiction (i.e., P is provable and not provable in PM). It follows that P “can only be true, but unprovable.” To refute or undermine this ‘proof,’ Wittgenstein says that if you have proved ¬P, you have proved that P is provable (i.e., since you have proved that it is not the case that P is not provable in Russell's system), and “you will now presumably give up the interpretation that it is unprovable” (i.e., ‘P is not provable in Russell's system’), since the contradiction is only proved if we use or retain this self-referential interpretation (§8). On the other hand, Wittgenstein argues (§8), ‘[i]f you assume that the proposition is provable in Russell's system, that means it is true in the Russell sense, and the interpretation “P is not provable” again has to be given up,’ because, once again, it is only the self-referential interpretation that engenders a contradiction. Thus, Wittgenstein's ‘refutation’ of “Gödel's proof” consists in showing that no contradiction arises if we do not interpret ‘P’ as ‘P is not provable in Russell's system’—indeed, without this interpretation, a proof of P does not yield a proof of ¬P and a proof of ¬P does not yield a proof of P. In other words, the mistake in the proof is the mistaken assumption that a mathematical proposition ‘P’ “can be so interpreted that it says: ‘P is not provable in Russell's system’.” As Wittgenstein says at (§11), “[t]hat is what comes of making up such sentences.” This ‘refutation’ of “Gödel's proof” is perfectly consistent with Wittgenstein's syntactical conception of mathematics (i.e., wherein mathematical propositions have no meaning and hence cannot have the ‘requisite’ self-referential meaning) and with what he says before and after (§8), where his main aim is to show (2) that, on his own terms, since “true in calculus Γ” is identical with “proved in calculus Γ,” the very idea of a true but unprovable proposition of calculus Γ is a contradiction-in-terms. To show (2), Wittgenstein begins by asking (§5), what he takes to be, the central question, namely, “Are there true propositions in Russell's system, which cannot be proved in his system?”. To address this question, he asks “What is called a true proposition in Russell's system…?,” which he succinctly answers (§6): “‘p’ is true = p.” Wittgenstein then clarifies this answer by reformulating the second question of (§5) as “Under what circumstances is a proposition asserted in Russell's game [i.e., system]?”, which he then answers by saying: “the answer is: at the end of one of his proofs, or as a ‘fundamental law’ (Pp.)” (§6). This, in a nutshell, is Wittgenstein's conception of “mathematical truth”: a true proposition of PM is an axiom or a proved proposition, which means that “true in PM” is identical with, and therefore can be supplanted by, “proved in PM.” Having explicated, to his satisfaction at least, the only real, non-illusory notion of “true in PM,” Wittgenstein answers the (§8) question “Must I not say that this proposition… is true, and… unprovable?” negatively by (re)stating his own (§§5–6) conception of “true in PM” as “proved/provable in PM”: “‘True in Russell's system’ means, as was said: proved in Russell's system; and ‘false in Russell's system’ means: the opposite has been proved in Russell's system.” This answer is given in a slightly different way at (§7) where Wittgenstein asks “may there not be true propositions which are written in this [Russell's] symbolism, but are not provable in Russell's system?”, and then answers “‘True propositions’, hence propositions which are true in another system, i.e. can rightly be asserted in another game.” In light of what he says in (§§5, 6, and 8), Wittgenstein's (§7) point is that if a proposition is ‘written’ in “Russell's symbolism” and it is true, it must be proved/ provable in another system, since that is what “mathematical truth” is. Analogously (§8), “if the proposition is supposed to be false in some other than the Russell sense, then it does not contradict this for it to be proved in Russell's sense,” for ‘[w]hat is called “losing” in chess may constitute winning in another game.’ This textual evidence certainly suggests, as Anderson almost said, that Wittgenstein rejects a true but unprovable mathematical proposition as a contradiction-in-terms on the grounds that “true in calculus Γ” means nothing more (and nothing less) than “proved in calculus On this (natural) interpretation of (RFM App. III), the early reviewers’ conclusion that Wittgenstein fails to understand the mechanics of Gödel's argument seems reasonable. First, Wittgenstein erroneously thinks that Gödel's proof is essentially semantical and that it uses and requires a self-referential proposition. Second, Wittgenstein says (§14) that “[a] contradiction is unusable” for “a prediction” that “that such-and-such construction is impossible” (i.e., that P is unprovable in PM), which, superficially at least (Rodych 1999a, 190–91), seems to indicate that Wittgenstein fails to appreciate the “consistency assumption” of Gödel's proof (Kreisel, Bernays, Anderson). If, in fact, Wittgenstein did not read and/or failed to understand Gödel's proof through at least 1941, how would he have responded if and when he understood it as (at least) a proof of the undecidability of P in PM on the assumption of PM's consistency? Given his syntactical conception of mathematics, even with the extra-mathematical application criterion, he would simply say that P, qua expression syntactically independent of PM, is not a proposition of PM, and if it is syntactically independent of all existent mathematical language-games, it is not a mathematical proposition. Moreover, there seem to be no compelling non-semantical reasons—either intra-systemic or extra-mathematical—for Wittgenstein to accommodate P by including it in PM or by adopting a non-syntactical conception of mathematical truth (such as Tarski-truth (Steiner 2000)). Indeed, Wittgenstein questions the intra-systemic and extra-mathematical usability of P in various discussions of Gödel in the Nachlass (Rodych 2002, 2003) and, at (§19), he emphatically says that one cannot “make the truth of the assertion [‘P’ or “Therefore P”] plausible to me, since you can make no use of it except to do these bits of legerdemain.” After the initial, scathing reviews of RFM, very little attention was paid to Wittgenstein's (RFM App. III) and (RFM VII, §§21–22) discussions of Gödel's First Incompleteness Theorem (Klenk 1976, 13) until Shanker's sympathetic (1988b). In the last 11 years, however, commentators and critics have offered various interpretations of Wittgenstein's remarks on Gödel, some being largely sympathetic (Floyd 1995, 2001) and others offering a more mixed appraisal [(Rodych 1999a, 2002, 2003), (Steiner 2001), (Priest 2004), (Berto 2009a)]. Recently, and perhaps most interestingly, (Floyd & Putnam 2000) and (Steiner 2001) have evoked new and interesting discussions of Wittgenstein's ruminations on undecidability, mathematical truth, and Gödel's First Incompleteness Theorem [(Rodych 2003, 2006), (Bays 2004), (Sayward 2005), and (Floyd & Putnam 2006)]. Though it is doubtful that all commentators will agree [(Wrigley 1977, 51), (Baker and Hacker 1985, 345), (Floyd 1991, 145, 143; 1995, 376; 2005, 80), (Maddy 1993, 55), (Steiner 1996, 202–204)], the following passage seems to capture Wittgenstein's attitude to the Philosophy of Mathematics and, in large part, the way in which he viewed his own work on mathematics. What will distinguish the mathematicians of the future from those of today will really be a greater sensitivity, and that will—as it were—prune mathematics; since people will then be more intent on absolute clarity than on the discovery of new games. Philosophical clarity will have the same effect on the growth of mathematics as sunlight has on the growth of potato shoots. (In a dark cellar they grow yards long.) A mathematician is bound to be horrified by my mathematical comments, since he has always been trained to avoid indulging in thoughts and doubts of the kind I develop. He has learned to regard them as something contemptible and… he has acquired a revulsion from them as infantile. That is to say, I trot out all the problems that a child learning arithmetic, etc., finds difficult, the problems that education represses without solving. I say to those repressed doubts: you are quite correct, go on asking, demand clarification! (PG 381, 1932) In his middle and later periods, Wittgenstein believes he is providing philosophical clarity on aspects and parts of mathematics, on mathematical conceptions, and on philosophical conceptions of mathematics. Lacking such clarity and not aiming for absolute clarity, mathematicians construct new games, sometimes because of a misconception of the meaning of their mathematical propositions and mathematical terms. Education and especially advanced education in mathematics does not encourage clarity but rather represses it—questions that deserve answers are either not asked or are dismissed. Mathematicians of the future, however, will be more sensitive and this will (repeatedly) prune mathematical extensions and inventions, since mathematicians will come to recognize that new extensions and creations (e.g., propositions of transfinite cardinal arithmetic) are not well-connected with the solid core of mathematics or with real-world applications. Philosophical clarity will, eventually, enable mathematicians and philosophers to “get down to brass tacks” (PG 467). • Wittgenstein, Ludwig, 1913, “On Logic and How Not to Do It,” The Cambridge Review 34 (1912–13), 351; reprinted in Brian McGuinness, Wittgenstein: A Life, Berkeley & Los Angeles, University of California Press: 169–170. • Wittgenstein, Ludwig, 1922, Tractatus Logico-Philosophicus, London: Routledge and Kegan Paul, 1961; translated by D.F. Pears and B.F. McGuinness. • Wittgenstein, Ludwig, 1929, “Some Remarks on Logical Form,” Proceedings of the Aristotelian Society, Supplementary Vol. 9: 162–171. PI Wittgenstein, Ludwig, 1953 [2001], Philosophical Investigations, 3^rd Edition, Oxford: Blackwell Publishing; translated by G. E. M. Anscombe. RFM Wittgenstein, Ludwig, 1956 [1978], Remarks on the Foundations of Mathematics, Revised Edition, Oxford: Basil Blackwell, G.H. von Wright, R. Rhees and G.E.M. Anscombe (eds.); translated by G.E.M • Wittgenstein, Ludwig, 1966 [1999], Lectures & Conversations on Aesthetics, Psychology and Religious Belief, Cyril Barrett, (ed.), Oxford: Blackwell Publishers Ltd. • Wittgenstein, Ludwig, 1967, Zettel, Berkeley: University of California Press; G.E.M Anscombe and G.H. von Wright (Eds.); translated by G.E.M. Anscombe. PG Wittgenstein, Ludwig, 1974, Philosophical Grammar, Oxford: Basil Blackwell; Rush Rhees, (ed.); translated by Anthony Kenny. PR Wittgenstein, Ludwig, 1975, Philosophical Remarks, Oxford: Basil Blackwell; Rush Rhees, (ed.); translated by Raymond Hargreaves and Roger White. • Wittgenstein, Ludwig, 1979a, Notebooks 1914–1916, Second Edition, G.H. von Wright and G. E. M. Anscombe (eds.), Oxford: Basil Blackwell. • Wittgenstein, Ludwig, 1979b, “Notes on Logic” (1913), in Notebooks 1914–1916, G.H. von Wright and G.E.M. Anscombe (eds.), Oxford: Basil Blackwell. • Wittgenstein, Ludwig, 1980, Remarks on the Philosophy of Psychology, Vol. I, Chicago: University of Chicago Press, G.E.M. Anscombe and G.H. von Wright, (eds.), translated by G.E.M. Anscombe. • Wittgenstein, Ludwig, 2000, Wittgenstein's Nachlass: The Bergen Electronic Edition, Oxford: Oxford University Press. AWL Ambrose, Alice, (ed.), 1979, Wittgenstein's Lectures, Cambridge 1932–35: From the Notes of Alice Ambrose and Margaret Macdonald, Oxford: Basil Blackwell. LFM Diamond, Cora, (ed.), 1976, Wittgenstein's Lectures on the Foundations of Mathematics, Ithaca, N.Y.: Cornell University Press. LWL Lee, Desmond, (ed.), 1980, Wittgenstein's Lectures, Cambridge 1930–32: From the Notes of John King and Desmond Lee, Oxford: Basil Blackwell. WVC Waismann, Friedrich, 1979, Wittgenstein and the Vienna Circle, Oxford: Basil Blackwell; edited by B.F. McGuinness; translated by Joachim Schulte and B.F. McGuinness. • Ambrose, Alice, 1935a, “Finitism in Mathematics (I),” Mind, 44 (174): 186–203. • –––, 1935b, “Finitism in Mathematics (II),” Mind, 44 (175): 317–340. • –––, 1972, “Mathematical Generality,” in Ludwig Wittgenstein: Philosophy and Language, Ambrose and Lazerowitz (eds.), London: George Allen and Unwin Ltd.: 287–318. • –––, 1982, “Wittgenstein on Mathematical Proof,” Mind, 91 (362): 264–372. • Ambrose, Alice, and Lazerowitz, Morris, (eds.), 1972, Ludwig Wittgenstein: Philosophy and Language, London: George Allen and Unwin Ltd. • Anderson, A.R., 1958, ‘Mathematics and the “Language Game”,’ The Review of Metaphysics, II: 446–458; reprinted in Philosophical of Mathematics, Benacerraf and Putnam, (eds.), Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1964: 481–490. • Baker, Gordon, and Hacker, P. M. S., 1985, Wittgenstein: Rules, Grammar and Necessity, Volume 2 of an Analytical Commentary on the Philosophical Investigations, Oxford: Blackwell. • Bays, Timothy, 2004, “On Floyd and Putnam on Wittgenstein on Gödel,” Journal of Philosophy, 101 (4), April: 197–210. • Benacerraf, Paul, and Putnam, Hilary, (eds.), 1964, Philosophical of Mathematics, Englewood Cliffs, N.J.: Prentice-Hall, Inc. • Benacerraf, Paul, and Putnam, Hilary, (eds.), 1983, Philosophy of Mathematics, 2nd ed., Cambridge: Cambridge University Press. • Bernays, Paul, 1959, “Comments on Ludwig Wittgenstein's Remarks on the Foundations of Mathematics,” Ratio, 2 (1): 1–22. • Berto, Fransesco, 2009a, “The Gödel Paradox and Wittgenstein's Reasons,” Philosophia Mathematica, 17 (2): 208–219. • –––, 2009b, There's Something About Gödel, West Sussex, U.K.: Wiley-Blackwell, Chapter 12: “Gödel vs. Wittgenstein and the Paraconsistent Interpretation”: 189–213. • Black, Max, 1965, “Verificationism and Wittgenstein's Reflections on Mathematics,” Revue Internationale de Philosophie, Vol. 23: 284–94; reprinted in Ludwig Wittgenstein: Critical Assessments, Vol. III, Shanker (ed.), London: Croom Helm: 68–76. • Brouwer, L.E.J., 1907, On the Foundations of Mathematics [Doctoral Thesis] (Amsterdam: 1907, 183p.); reprinted in Heyting, (ed.), L.E.J. Brouwer: Collected Works, Vol. I: 11–101. • –––, 1908, “The Unreliability of the Logical Principles,” in A. Heyting, (ed.), 1975, L.E.J. Brouwer: Collected Works: Philosophy and Foundations of Mathematics, Vol. I, Amsterdam: North Holland Publishing Company: 107–111. • –––, 1929, “Mathematik, Wissenschaft und Sprache,” Monatshefte für Mathematik und Physik, 36 (1): 153–64; reprinted as “Mathematics, Science, and Language” in From Brouwer to Hilbert, Paolo Mancosu (ed.), Oxford: Oxford University Press: 45–53. • –––, 1948, “Consciousness, Philosophy and Mathematics,” in Philosophy of Mathematics, 2nd ed., Benacerraf and Putnam (eds.), Cambridge: Cambridge University Press: 90–96. • –––, 1955, “The Effect of Intuitionism on Classical Algebra of Logic,” Proceedings of the Royal Irish Academy, 57: 113–116. • –––, 1981, Brouwer's Cambridge Lectures on Intuitionism, Cambridge: Cambridge University Press. • Clark, Peter, and Hale, Bob, (eds.), 1994, Reading Putnam, Blackwell Publishers, Cambridge, Mass. • Coliva, A. & Picardi, E. (eds.), 2004, Wittgenstein Today, Padova, Il Poligrafo. • Conant, James, 1997, “On Wittgenstein's Philosophy of Mathematics,” Proceedings of the Aristotelian Society, 97 (2): 195–222. • Crary, Alice, and Read, Rupert, (eds.), 2000, The New Wittgenstein, London and New York: Routledge. • Da Silva, Jairo Jose, 1993, “Wittgenstein on Irrational Numbers,” in Wittgenstein's Philosophy of Mathematics, K. Puhl, (ed.), Vienna: Verlag Hölder-Pichler-Tempsky: 93–99. • Dreben, Burton, and Floyd, Juliet, “Tautology: How Not To Use A Word,” Synthese, 87: 23–49, 1991. • Dummett, Michael, 1959, “Wittgenstein's Philosophy of Mathematics,” The Philosophical Review, 68: 324–348. • –––, 1978, “Reckonings: Wittgenstein on Mathematics,” Encounter, 50 (3): 63–68; reprinted in Ludwig Wittgenstein: Critical Assessments, Vol. III, S. Shanker (ed.), London: Croom Helm: 111–120. • –––, 1994, “Wittgenstein on Necessity: Some Reflections,” in Reading Putnam, Clark and Hale (eds.), Blackwell Publishers, Cambridge, Mass.: 49–65. • Finch, Henry Le Roy, 1977, Wittgenstein–The Later Philosophy, Atlantic Highlands, N.J.: Humanities Press. • Floyd, Juliet, 1991, “Wittgenstein on 2, 2, 2…: The Opening of Remarks on the Foundations of Mathematics,” Synthese, 87: 143–180. • –––, 1995, ‘On Saying What You Really Want to Say: Wittgenstein, Gödel, and the Trisection of the Angle,’ in From Dedekind to Gödel: Essays on the Development of Mathematics, J. Hintikka (ed.), Dordrecht: Kluwer Academic Publishers, 373–425. • –––, 2000b, “Wittgenstein, Mathematics and Philosophy,” in The New Wittgenstein, Crary and Read (eds.), London and New York: Routledge: 232–261. • –––, 2001, “Prose versus Proof: Wittgenstein on Gödel, Tarski, and Truth,” Philosophia Mathematica, 9 (3): 280–307. • –––, 2002, “Number and Ascriptions of Number in Wittgenstein's Tractatus,” in Perspectives on Early Analytic Philosophy: Frege, Russell, Wittgenstein, E. Reck (ed.), New York: Oxford University Press: 308–352. • –––, 2005, “Wittgenstein on Philosophy of Logic and Mathematics,” in The Oxford Handbook of Philosophy of Logic and Mathematics, S. Shapiro (ed.), Oxford: Oxford University Press: 75–128. • Floyd, Juliet, and Dreben, Burton, 1991, “Tautology: How Not to Use a Word,” Synthese, 87: 23–49. • Floyd, Juliet, and Putnam, Hilary, 2000a, “A Note on Wittgenstein's “Notorious Paragraph” about the Gödel Theorem,” The Journal of Philosophy, 97 (11): 624–632. • –––, 2006, “Bays, Steiner, and Wittgenstein's ‘Notorious’ Paragraph about the Gödel Theorem,” The Journal of Philosophy, 103 (2): 101–110. • Fogelin, Robert J., 1968, “Wittgenstein and Intuitionism,” American Philosophical Quarterly, 5: 267–274. • –––, 1987 [1976], Wittgenstein, Second Edition, New York: Routledge & Kegan Paul. • Frascolla, Pasquale, 1980, “The Constructivist Model in Wittgenstein's Philosophy of Mathematics,” Revista Filosofia, 71: 297–306; reprinted in Ludwig Wittgenstein: Critical Assessments, Vol. III, S. Shanker (ed.), London: Croom Helm: 242–249. • –––, 1994, Wittgenstein's Philosophy of Mathematics, London and New York: Routledge. • –––, 1997, “The Tractatus System of Arithmetic,” Synthese, 112: 353–378. • –––, 1998, “The Early Wittgenstein's Logicism,” Acta Analytica, 21: 133–137. • –––, 2004, “Wittgenstein on Mathematical Proof,” in Wittgenstein Today, A. Coliva & E. Picardi, (eds.), Padova, Il Poligrafo: 167–184. • Frege, Gottlob, 1959 [1884], The Foundations of Arithmetic, translated by J.L. Austin, Oxford: Basil Blackwell. • Garavaso, Pieranna, 1988, “Wittgenstein's Philosophy of Mathematics: A Reply to Two Objections,” Southern Journal of Philosophy, 26 (2): 179–191. • Gerrard, Steve, 1991, “Wittgenstein's Philosophies of Mathematics,” Synthese, 87: 125–142. • –––, 1996, “A Philosophy of Mathematics Between Two Camps,” in The Cambridge Companion to Wittgenstein, Sluga and Stern, (eds.), Cambridge: Cambridge University Press: 171–197. • –––, 2002, “One Wittgenstein?”, in From Frege to Wittgenstein: Perspectives on Early Analytic Philosophy, E. Reck, (ed.), New York: Oxford University Press: 52–71. • Glock, Hans-Johann, 1996, “Necessity and Normativity,” in The Cambridge Companion to Wittgenstein, Sluga and Stern, (eds.), Cambridge: Cambridge University Press: 198–225. • Gödel, Kurt, 1931, “On Formally Undecidable Propositions of Principia Mathematica and Related Systems I,” in From Frege to Gödel, van Heijenoort (ed.), Cambridge, Mass.: Harvard University Press: • –––, 1953–1957, “Is Mathematics Syntax of Language?”, Version III, in Gödel, Collected Works, Vol. III, S. Feferman, J. Dawson, Jr., W. Goldfarb, C. Parsons, and R. Solovay (eds.), Oxford: Oxford University Press: 334–356. • –––, 1995, Collected Works, Vol. III, S. Feferman, J. Dawson, Jr., W. Goldfarb, C. Parsons, and R. Solovay (eds.), Oxford: Oxford University Press. • Goldstein, Laurence, 1986, “The Development of Wittgenstein's Views on Contradiction,” History and Philosophy of Logic, 7: 43–56. • –––, 1989, “Wittgenstein and Paraconsistency,” in G. Priest, R. Routley and J. Norman, (eds.), Paraconsistent Logic: Essays on the Inconsistent, Munich : Philosophia Verlag: 540–562. • Goodstein, R. L., 1957, “Critical Notice of Remarks on the Foundations of Mathematics,” Mind, 66: 549–553. • –––, 1972, “Wittgenstein's Philosophy of Mathematics,” in Ludwig Wittgenstein: Philosophy and Language, Ambrose and Lazerowitz, (eds.), London: George Allen and Unwin Ltd.: 271–86. • Hacker, P.M.S., 1986, Insight & Illusion. Themes in the Philosophy of Wittgenstein, revised edition, Clarendon Press, Oxford. • Han, Daesuk, 2010, “Wittgenstein and the Real Numbers,” History and Philosophy of Logic, 31 (3): 219–245. • van Heijenoort, Jean, (ed.), 1967, From Frege to Gödel: A Sourcebook in Mathematical Logic, Cambridge, Mass.: Harvard University Press. • Heyting, A., (ed.), 1975, L.E.J. Brouwer: Collected Works: Philosophy and Foundations of Mathematics, Vol. I, Amsterdam: North Holland Publishing Company. • Hilbert, David, 1925, “On the Infinite,” in From Frege to Gödel, van Heijenoort (ed.), Cambridge, Mass.: Harvard University Press: 369–392. • Hintikka, Jaakko, 1993, “The Original Sinn of Wittgenstein's Philosophy of Mathematics,” in Wittgenstein's Philosophy of Mathematics, K. Puhl, (ed.), Vienna: Verlag Hölder-Pichler-Tempsky: 24-51. • –––, (ed.), 1995, From Dedekind to Gödel: Essays on the Development of Mathematics, Dordrecht: Kluwer Academic Publishers. • Hintikka, Jaakko, and Puhl, K. (eds.), 1994, The British Tradition in 20^th Century Philosophy: Papers of the 17^th International Wittgenstein Symposium 1994, Kirchberg-am-Wechsel: Austrian Ludwig Wittgenstein Society. • Kielkopf, Charles F., 1970, Strict Finitism, The Hague: Mouton. • Klenk, V.H., 1976, Wittgenstein's Philosophy of Mathematics, The Hague: Martinus Nijhoff. • Kölbel, Max, and Weiss, Bernhard (Eds.), 2004, Wittgenstein's Lasting Significance, London: Routledge. • Kreisel, Georg, 1958, “Wittgenstein's Remarks on the Foundations of Mathematics,” British Journal for the Philosophy of Science, 9 (34): 135–57. • –––, 1998, “Second Thoughts Around Some of Gödel's Writings: A Non-Academic Option,” Synthese, 114: 99–160. • Kremer, Michael, 2002, “Mathematics and Meaning in the Tractatus,” Philosophical Investigations, 25 (3): 272–303. • Kripke, Saul A., 1982, Wittgenstein on Rules and Private Language, Cambridge, Mass.: Harvard University Press. • Lampert, Timm, 2008, “Wittgenstein on the Infinity of Primes,” History and Philosophy of Logic, 29 (3): 272–303. • –––, 2009, “Wittgenstein on Pseudo-Irrationals, Diagonal-Numbers and Decidability,” in Logica Yearbook 2008, M. Peliš, (ed.), London: College Publications: 95–110. • Maddy, Penelope, 1986, “Mathematical Alchemy,” British Journal for the Philosophy of Science, 37: 279–312. • –––, 1993, “Wittgenstein's Anti-Philosophy of Mathematics,” in Wittgenstein's Philosophy of Mathematics, K. Puhl, (ed.), Vienna: Verlag Hölder-Pichler-Tempsky: 52-72. • Mancosu, Paolo, 1998, From Brouwer to Hilbert: The Debate on the Foundations of Mathematics in the 1920s, Oxford: Oxford University Press. • Mancosu, Paolo & Marion, Mathieu, 2002, “Wittgenstein's Constructivization of Euler's Proof of the Infinity of Primes,” in Vienna Circle Institute Yearbook, Dordrecht: Kluwer: 171–188. • Marconi, Diego, 1984, “Wittgenstein on Contradiction and the Philosophy of Paraconsistent Logic,” History of Philosophy Quarterly, 1 (3): 333–352. • Marion, Mathieu, 1993, “Wittgenstein and the Dark Cellar of Platonism,” in Wittgenstein's Philosophy of Mathematics, K. Puhl, (ed.), Vienna: Verlag Hölder-Pichler-Tempsky: 110-118. • –––, 1995a, “Wittgenstein and Finitism,” Synthese, 105: 143–65. • –––, 1995b, “Wittgenstein and Ramsey on Identity,” in From Dedekind to Gödel, Jaakko Hintikka (ed.), Dordrecht: Kluwer Academic Publishers: 343–371. • –––, 1995d, ‘Kronecker's “Safe Haven of Real Mathematics”,’ in Marion & Cohen, Quebec Studies in the Philosophy of Science, Part I, Dordrecht: Kluwer: 135–87. • –––, 1998, Wittgenstein, Finitism, and the Foundations of Mathematics, Oxford: Clarendon Press. • –––, 2003, “Wittgenstein and Brouwer,” Synthese, 137: 103–127. • –––, 2004, “Wittgenstein on Mathematics: Constructivism or Constructivity?”, in Wittgenstein Today, Coliva & Picardi, (eds.), Padova, Il Poligrafo: 201–222. • –––, 2008, “Brouwer on 'Hypotheses' and the Middle Wittgenstein”, in One Hundred Years of Intuitionism, M. van Atten, P. Boldini, M. Bourdreau, G. Heinzmann, (eds.), Basel, Birkhauser: 96–114. • –––, 2009, “Radical Anti-Realism, Wittgenstein and the Length of Proofs,” Synthese, 171 (3): 419–432. • Marion, Mathieu, and Cohen, R. S., (eds.), 1995c, Quebec Studies in the Philosophy of Science, Part I, Dordrecht: Kluwer. • Marion, Mathieu & Mancosu, Paolo, 2002, “Wittgenstein's Constructivization of Euler's Proof of the Infinity of Primes,” in Stadler, (ed.) Vienna Circle Institute Yearbook, Dordrecht: Kluwer: • McCarthy, Timothy, and Stidd, Sean, (eds.), 2001, Wittgenstein in America, Oxford: Clarendon Press. • McGuinness, Brian, 1988, Wittgenstein: A Life—Young Ludwig 1889–1921, Berkeley & Los Angeles, University of California Press. • McGuinness, Brian and von Wright, G.H., (eds.), 1995, Ludwig Wittgenstein: Cambridge Letters: Correspondence with Russell, Keynes, Moore, Ramsey, and Sraffa, Oxford: Blackwell Publishers Ltd. • Monk, Ray, 1990, Ludwig Wittgenstein: The Duty of Genius, New York: The Free Press. • Moore, A. W., 2003, “On the Right Track,” Mind, 112 (446): 307–321. • Moore, G. E., 1955, “Wittgenstein's Lectures in 1930–33,” Mind, 64 (253): 1–27. • Morton, Adam, and Stich, Stephen P., 1996, Benacerraf and His Critics, Oxford: Blackwell. • Poincaré, Henri, 1913a [1963], Mathematics and Science: Last Essays, New York: Dover Pub., Inc., John W. Bolduc (trans.). • –––, 1913b, “The Logic of Infinity,” in Mathematics and Science: Last Essays: 45–64. • Potter, Michael, 2000, Reason's Nearest Kin, Oxford: Oxford University Press. • Priest, Graham, 2004, “Wittgenstein's Remarks on Gödel's Theorem,” in Max Kölbel and Bernhard Weiss (eds.), Wittgenstein's Lasting Significance, London: Routledge. • Priest, Graham, Routley, Richard, and Norman, Jean, (eds.), 1989, Paraconsistent Logic: Essays on the Inconsistent, Munich: Philosophia Verlag. • Puhl, Klaus, (ed.), 1993, Wittgenstein's Philosophy of Mathematics, Verlag Holder-Pichler-Tempsky, Vienna. • Putnam, Hilary, 1994, Words and Life (James Conant (ed.)), Cambridge, Mass.: Harvard University Press. • –––, 1994, “Rethinking Mathematical Necessity,” in Hilary Putnam, Words and Life (James Conant, (ed.)) (Cambridge, Mass.: Harvard University Press, 1994): 245–263; reprinted in Crary and Read [2000]: 218–231. • –––, 1996, “On Wittgenstein's Philosophy of Mathematics,” Proceedings of the Aristotelian Society (Supplement), 70: 243–264. • –––, 2001, “Was Wittgenstein Really an Anti-realist about Mathematics?”, in Wittgenstein in America, T. McCarthy and S. Stidd, (eds.), Oxford: Clarendon Press: 140–194. • –––, 2007, “Wittgenstein and the Real Numbers,” in Wittgenstein and the Moral Life: Essays in Honor of Cora Diamond, Alice Crary, (ed.), Cambridge, Mass.: The MIT Press: 235–250. • Putnam, Hilary, and Juliet Floyd, 2000, “A Note on Wittgenstein's “Notorious Paragraph” about the Gödel Theorem,” The Journal of Philosophy, 97 (11): 624–632. • Quine, W.V.O., 1940, Mathematical Logic, Cambridge, Mass.: Harvard University Press, 1981. • Ramharter, Esther, 2009, “Review of Redecker's Wittgensteins Philosophie der Mathematik,” Philosophia Mathematica, 17 (3): 382–392. • Ramsey, Frank Plumpton, 1923, “Review of ‘Tractatus’,” Mind, 32 (128): 465–478. • –––, 1925, “The Foundations of Mathematics.” in Philosophical Papers, D.H. Mellor, (ed.), Cambridge: Cambridge University Press: 164–224. • –––, 1929, “The Formal Structure of Intuitionist Mathematics,” in Notes on Philosophy, Probability and Mathematics, M. Galavotti (ed.), Napoli: Bibliopolis: 203–220. • –––, 1990, Philosophical Papers, D.H. Mellor, (ed.), Cambridge: Cambridge University Press. • –––, 1991, Notes on Philosophy, Probability and Mathematics, M. Galavotti (ed.), Napoli: Bibliopolis. • Reck, E., (ed.), 2002, Perspectives on Early Analytic Philosophy: Frege, Russell, Wittgenstein, New York: Oxford University Press. • Redecker, Christine, 2006, Wittgensteins Philosophie der Mathematik: Eine Neubewertung im Ausgang von der Kritik an Cantors Beweis der Überabzählbarkeit der reeleen Zahlen [Wittgenstein's Philosophy of Mathematics: A Reassessment Starting From the Critique of Cantor's Proof of the Uncountability of the Real Numbers], Frankfurt-Hausenstamm: Ontos Verlag. • Rodych, Victor, 1995, “Pasquale Frascolla's Wittgenstein's Philosophy of Mathematics,” Philosophia Mathematica, 3 (3): 271–288. • –––, 1997, “Wittgenstein on Mathematical Meaningfulness, Decidability, and Application,” Notre Dame Journal of Formal Logic, 38 (2): 195–224. • –––, 1999a, “Wittgenstein's Inversion of Gödel's Theorem,” Erkenntnis, 51 (2–3): 173–206. • –––, 1999b, “Wittgenstein on Irrationals and Algorithmic Decidability,’ Synthese, 118 (2): 279–304. • –––, 2000a, ‘Wittgenstein's Critique of Set Theory,” The Southern Journal of Philosophy, 38 (2): 281–319. • –––, 2000b, “Wittgenstein's Anti-Modal Finitism,” Logique et Analyse, 43 (171–172): 301–333. • –––, 2001, “Gödel's ‘Disproof’ of the Syntactical Viewpoint,” The Southern Journal of Philosophy, 39 (4): 527–555. • –––, 2002, “Wittgenstein on Gödel: The Newly Published Remarks,” Erkenntnis, 56 (3): 379–397. • –––, 2003, “Misunderstanding Gödel: New Arguments about Wittgenstein and New Remarks by Wittgenstein,” Dialectica, 57 (3): 279–313. • –––, 2006, “Who Is Wittgenstein's Worst Enemy?: Steiner on Wittgenstein on Gödel,” Logique et Analyse, 49 (193): 55–84. • –––, 2008, “Mathematical Sense: Wittgenstein's Syntactical Structuralism,” in Wittgenstein and the Philosophy of Information, A. Pichler and H. Hrachovec, (eds.), Frankfurt: Ontos Verlag, Proceedings of the 30th International Wittgenstein Symposium, Vol. 1: 81–103. • Russell, Bertrand, 1903, The Principles of Mathematics, London: Routledge, 1992; second edition, with a new Introduction, 1937. • –––, 1914, Our Knowledge of the External World, LaSalle, Ill.: Open Court Publishing Company. • –––, 1918, “The Philosophy of Logical Atomism,” The Monist, 5 (29): 32–63; 190–222; 345–380; reprinted in R.C. Marsh, (ed.), Logic and Knowledge: London: Routledge, 1956): 177–281. • –––, 1919, Introduction to Mathematical Philosophy, London: Routledge, (1993 edition with a new Introduction by John Slater). • Savitt, Steven, 1979, “Wittgenstein's Early Philosophy of Mathematics,” Philosophy Research Archives Vol. 5; reprinted in Ludwig Wittgenstein: Critical Assessments, Vol. III, Shanker (ed.), London: Croom Helm: 26–35. • Sayward, Charles, 2001, “On Some Much Maligned Remarks of Wittgenstein on Gödel,” Philosophical Investigations 24:3: 262–270. • –––, 2005, “Steiner versus Wittgenstein: Remarks on Differing Views of Mathematical Truth,” Theoria, 20 (54): 347–352. • Shanker, Stuart, (ed.), 1986, Ludwig Wittgenstein: Critical Assessments, Vol. III, London: Croom Helm. • –––, 1987, Wittgenstein and the Turning Point in the Philosophy of Mathematics, London: Croom Helm. • –––, (ed.), 1988a, Gödel's Theorem in Focus, London: Routledge. • –––, 1988b, “Wittgenstein's Remarks on the Significance of Gödel's Theorem,” in Gödel's Theorem in Focus, Shanker, (ed.): 155–256. • Shapiro, Stewart (ed.), 2005, The Oxford Handbook of Philosophy of Logic and Mathematics, Oxford: Oxford University Press. • Skolem, Thoralf, 1923, “The Foundations of Elementary Arithmetic Established by means of the Recursive Mode of Thought, Without the use of Apparent Variables Ranging Over Infinite Domains,” in From Frege to Gödel, van Heijenoort (ed.), Cambridge, Mass.: Harvard University Press: 303–333. • Sluga, Hans, and Stern, David G., (eds.), 1996, The Cambridge Companion to Wittgenstein, Cambridge: Cambridge University Press. • Stadler, Friedrich, (ed.), 2002, Vienna Circle Institute Yearbook, Dordrecht: Kluwer. • Steiner, Mark, 1975, Mathematical Knowledge, Ithaca, N.Y.: Cornell University Press. • –––, 1996, “Wittgenstein: Mathematics, Regularities, Rules,” in Benacerraf and His Critics, Morton and Stich (eds.), Oxford: Blackwell: 190–212. • –––, 2000, “Mathematical Intuition and Physical Intuition in Wittgenstein's Later Philosophy,” Synthese, 125 (3): 333–340. • –––, 2001, “Wittgenstein as His Own Worst Enemy: The Case of Gödel's Theorem,” Philosophia Mathematica, 9 (3): 257–279. • –––, 2009, “Empirical Regularities in Wittgenstein's Philosophy of Mathematics,” Philosophia Mathematica, 17 (1): 1–34. • Sullivan, Peter M., 1995, ‘Wittgenstein on “The Foundations of Mathematics”, June 1927,” Theoria, 105 (142): 105–142. • Tait, William, 1986, “Truth and Proof: The Platonism of Mathematics,” Synthese, 69: 341–370. • Van Atten, 2004, On Brouwer, Toronto: Wadsworth. • Van Dalen, Dirk, 2005, Mystic, Geometer, and Intuitionist: The Life of L.E.J. Brouwer: Hope and Disillusion Vol. II, Oxford: Clarendon Press. • Waismann, Friedrich, 1930, “The Nature of Mathematics: Wittgenstein's Standpoint,” in Ludwig Wittgenstein: Critical Assessments, Vol. III, Shanker (ed.), London: Croom Helm: 60–67. • Wang, Hao, 1958, “Eighty Years of Foundational Studies, Dialectica, 12: 466–497. • –––, 1984, “Wittgenstein's and Other Mathematical Philosophies,” Monist, 67: 18–28. • –––, 1991, “To and From Philosophy—Discussions with Gödel and Wittgenstein,” Synthese, 88: 229–277. • Watson, A. G. D., 1938, “Mathematics and Its Foundations,” Mind, 47 (188): 440–451. • Weyl, Hermann, 1921, “On the New Foundational Crisis of Mathematics,” Mathematische Zeitschrift, 10: 37–79; reprinted in From Brouwer to Hilbert, Paolo Mancosu, (ed.), Oxford: Oxford University Press: 86–118; translated by Benito Müller. • –––, 1925–1927, “The Current Epistemological Situation in Mathematics,” Symposion, 1: 1–32; reprinted in From Brouwer to Hilbert, Paolo Mancosu, (ed.), Oxford: Oxford University Press: 123–142. • –––, 1949 [1927], Philosophy of Mathematics and Natural Science, 2^nd Edition, Princeton, N.J.: Princeton University Press. • Whitehead, Alfred North, and Bertrand Russell, 1910, Principia Mathematica, Volume I, Abridged, Cambridge: Cambridge University Press; 1970. • von Wright, G.H., (ed.), 1973, Ludwig Wittgenstein: Letters to C.K. Ogden, Oxford: Basil Blackwell. • –––, 1982, Wittgenstein, Basil Blackwell, Oxford. • –––, 1982, “The Wittgenstein Papers,” in Wittgenstein, von Wright (ed.), Basil Blackwell, Oxford: 36–62. • Wright, Crispin, 1980, Wittgenstein on the Foundations of Mathematics, London: Duckworth. • –––, 1981, “Rule-following, Objectivity and the Theory of Meaning,” in Wittgenstein: To Follow a Rule, C. Leich and S. Holtzman, eds., London: Routledge: 99–117. • –––, 1982, “Strict Finitism,” Synthese, 51: 203–82. • –––, 1984, “Kripke's account of the Argument against Private Language,” Journal of Philosophy 71 (12): 759–778. • –––, 1986, “Rule-following and Constructivism,” in Meaning and Interpretation C. Travis (ed.), Oxford: Blackwell: 271–297. • –––, 1991, “Wittgenstein on Mathematical Proof,” in Wright 2001: 403–430. • –––, 2001, Rails to Infinity, Cambridge, Mass., Harvard University Press. • Wrigley, Michael, 1977, “Wittgenstein's Philosophy of Mathematics,” Philosophical Quarterly, 27 (106): 50–9. • –––, 1980, “Wittgenstein on Inconsistency,” Philosophy, 55: 471–84. • –––, 1993, “The Continuity of Wittgenstein's Philosophy of Mathematics,” in Wittgenstein's Philosophy of Mathematics, K. Puhl, (ed.), Vienna: Verlag Hölder-Pichler-Tempsky: 73-84. • –––, 1998, “A Note on Arithmetic and Logic in the Tractatus,” Acta Analytica, 21: 129–131. How to cite this entry. Preview the PDF version of this entry at the Friends of the SEP Society. Look up this entry topic at the Indiana Philosophy Ontology Project (InPhO). Enhanced bibliography for this entry at PhilPapers, with links to its database.
{"url":"https://plato.stanford.edu/archivES/FALL2017/Entries/wittgenstein-mathematics/index.html","timestamp":"2024-11-10T02:53:20Z","content_type":"text/html","content_length":"198119","record_id":"<urn:uuid:e0e4de8d-d938-4ffc-ad41-d135c59a5267>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00345.warc.gz"}
Nonlinear Dynamic Analysis of Annular FG Porous Sandwich Plates Reinforced by Graphene Platelets Department of Mechanical Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran KEYWORDS ABSTRACT GPLs; In this paper, the nonlinear dynamic analysis of porous annular sandwich plates reinforced with graphene platelets (GPLs) under different boundary conditions is investigated. The Gaussian Random Field (GRF) alongside with Halpin-Tsai micromechanics model are used for the variational Poisson’s ratio and effective material property of the GPLs which are distributed Nonlinear in two forms of symmetric and non-symmetric patterns with different porosity dispersion models. Using Von-Karman nonlinear relations and different plate theories, the time-dependent dynamic governing equations are obtained and then solved using the dynamic relaxation (DR) method combined with implicit Newmark’s integration technique. Finally, some key elements namely: GPL analysis; weight fractions and distributions, porosity coefficients and dispersions, different loadings, boundary conditions, and the effects of thickness-to-radius ratio are discussed in detail. The results show that with an increase in porosity, the difference between the results of FSDT and MHSDT greatens. Also, a significant increase in plate stiffness is observed by adding a MHSDT; small amount of GPL to the porous core of the sandwich plate. 1. Introduction The strength and stability of structures have always been two important principles in science that have attracted the attention of many scientists and researchers. For this purpose, for instance, in the construction of high-speed trains, space rockets, defense industries, and space shuttles they have tried to make materials as advanced and more resistant to various conditions. Due to their high tensile strength, sandwich structures have always been of great importance to manufacturers they are generally made of two or multi-layered composites usually with a central core fabricated of foams like polystyrene, honeycombs, balsa woods or other equivalent substances and two face sheets made of epoxies, glass, carbon, sheet metals or any other similar material. Dynamic analysis of structures and plates with annular and circular geometries by various methods and theories has been carried out by many scientists and scholars because of their exclusive geometry and specific behavior under any sort of static and dynamic loadings and conditions. Working on determination of the dynamic response of large rectangular plates, Beskot and Leung [1] obtained the effects of viscoelastic damping values with the combination of FD, FE, and Laplace transform. Nath et al. [2] used the Chebyshev polynomial collocation point to couple with the Newmark k-β scheme for the time-dependent equations on plates and shells. Smaill [3] presented the importance of plate nonlinearity for the dynamic response of circular plates on Pasternak elastic foundations. Various numerical works are offered by Sirnivasan and Ramachandra [4] for different bore sizes on axisymmetric dynamic responses of annular and circular bimodulus plates. A study on the failure and dynamics of circular plates was done by Shen and Jones [5]. Day and Rao [6] studied the transient response of circular membranes and plates with the use of numerical Laplace transforms and finite difference in conjunction with numerical inversion technique, they presented the influence of interior and exterior viscoelastic damping on the responses. Bassi et al. [7] obtained pulsated results for sandwich plates using the FE method and Galerkin’s models. Submitting a new semi-analytical method for analyzing the vibrations of circular plates, Peng et al. [8] showed the accuracy of the proposed method among equivalent procedures. Damped schematics of a thick uniform plate under explosion loadings were analyzed including the rotary inertia influences by Aiyesimi et al. [9]. With the utilization of the FSDT of the Reddy plates, Eipakchi and Khadem Moshir [10] represented the viscoelastic resolutions for the transient response of annular plates. These are just a few of the limited and relevant aspects of the dynamic analysis of circular and annular plates with various material properties in the free literature done by individuals implementing various solving methods, showing the importance of such types of geometry in various engineering issues. Failure and difficulties in achieving satisfying results have always been major obstacles for scientists, thus with the introduction of functionally graded materials, they have created a new direction of discovery to lead individuals in the field of engineering to facilitate and overcome the obstacles ahead. An FG structure consists of variations in compositional properties through the volume with specific functions of the material, frequently maintaining a ceramic matter embedded in a metallic matrix which will lead to the increase in thermal resistance, corrosion persistence, toughness in strength, and stiffness of the material. Moreover, even with the manufacturing of sandwich materials, one can consider another way of strengthening them for the tensile and compressive stresses. In this way, sandwich structures are usually considered from two-layer to multi-layer composites, and for example, in three-layer plates, the middle layer is considered from a type of foam or FG mode, and the outer plates are from one metal material. Using the sinusoidal shear deformation theory of plates, Zenkour [11] investigated the critical buckling and natural frequency of FGM sandwich plates. Subjecting FG circular plates under low-velocity impact loadings, Dai et al. [12] studied the significance of the primary velocity of a striking ball in the responses of the plate using the Newmark method. Three years later, Dai [13] investigated the transient response of an FG multi-layered circular plate with central disks to demonstrate the difference in geometry parameters of single-layer and sandwich plates. Dynamical bending of stepped variational width annular and circular functionally graded plates has been examined by Molla-Alipour [14] employing a semi-analytical method on the basis of power series. Effects of porosity on bending, buckling, free vibrations and dynamic instability behaviors of different FG plates and sandwich structures [15-21] are done by some researchers on several geometries of beams, plates, and cylindrical shells with the application of various theories namely: FE, FD, DQM, Chebyshev-Ritz, HSDT, and isogeometric analysis. Babaei et al. [22], acquired natural frequencies and responses of FG annular sector plates and cylindrical shells with the acquisition of 3D elasticity theory. Implementing the kinetic dynamic relaxation method and FSDT formulations, Esmaeilzadeh et al. [23] developed a non-local strain gradient model for the numerical investigation of bilateral FG nanoplates with porous properties. Under pulse loadings, using a Kelvin-Voigt model, forced motions of viscoelastic FG porous beams are investigated by the use of the adopted FE method for the first time by Akbas et al. [24]. With further observations, porosity a well-noted item used in many works is one useful factor to overrule the flaws in mechanical properties of structures, despite their advantages such as ductility reductant, creep resistance, adhesion regressive and owning light weighting features, they contain a major defect which causes decreasing structural stiffness and strength. Thus, scientists and researchers have come to the conclusion of embedding certain structures with micro and nano fillers to overcome this deficiency. Free vibrations and buckling of graphene platelets (GPLs) reinforced FGM porous beams is the subject of an article surveyed by Kitipornchai et al. [25] based on the theory of Timoshenko beams and Ritz method for the natural fundamental frequency of the nanocomposite structure. On the same geometry with the same material properties, Chen et al. [26] studied the nonlinear responses and post-buckling behavior of GPL-reinforced material using the Von Karman nonlinear large deflections. Yang et al. [27], based on the Chebyshev-Ritz method and first-order shear deformation theory of plates, acted on the investigation of vibrations and buckling of porous nanocomposite plates reinforced by GPLs. Polit et al. [28] investigated the stability and bending results of porous GPL-reinforced curved beams based on the theory of higher-order shear deformation of plates by the introduction of Navier’s procedure for the analytical results. Li et al. [29] studied the nonlinear responses and buckling solutions of a porous sandwich rectangular plate with Winkler-Pasternak foundations reinforced by graphene platelets for the observations of porosity effects and GPL weight fraction and loading velocity influence on the composite structure behavior. Esmaeilzadeh and Kadkhodayan [30] aimed the investigation of transient behaviors of a moving porous FGM sandwich rectangular nanoplate reinforced by GPLs and effects of non-local strain gradient parameters, porosity, GPL weight fraction, and variant nanoplate velocities are considered in this paper. Safarpour et al. [31] considered a parametric 3D bending and frequency study on annular and circular functionally graded porous plates embedded with graphene nanofillers using DQM. Based on the theory of elasticity, Rahimi et al. [32], discussed on 3D static and vibrations of porous FG cylindrical shells in 2019. Zhao et al. [33], worked on instability factors affecting dynamic response of porous FG arches consolidated with GPL, based on Euler-Bernoulli classical theory. For the dynamic instable territory, a Galerkin approach was applied for the derivation of Mathieu-Hill equation. Their results show that by adding a small amount of GPL and with the use of uniform asymmetric distribution of porosities in arch composite plates, one can increase stability and resistance to a considerable extent. Lieu et al. [34] presented an isogeometric Bezier formulation for transient response and bending analysis of FG porous plates reinforced by GPL, deriving the equations of motions using a generalized HSDT in couple with Bezier isogeometric formulation and Newmark integration method for the time-varying equations. Based on modified strain gradient and first-order plates theory, Arshid et al. [35] studied on bending, buckling, and free vibrations of annular micro-scaled functionally graded porous plate which is reinforced with graphene nanoplatelets using the GDQ method. Results for wave propagation through FG-GPL reinforced porous rectangular plates were accomplished by Gao et al. [36] using three general plate theories namely CPT, FSDT, and TSDT. Results show that different plate theories can have accurate results on a lower number of waves, whilst the FSDT and TSDT show better results for a higher number of waves. Nejadi et al. [37] considered studies on vibrations and stability of sandwich pipes with porosities and GPLs using the differential quadrature method. In their paper, the essential impact of the velocity of fluid flow on the stability of the structure is illustrated. TAO and Dai [38] investigated the post-buckling of cylindrical sandwich porous shells with GPL reinforcement based on higher order shear deformation theory. Results show with more addition of GPL to the FG core, one can acquire much strength in post buckling behaviors. In 2020, Khayat et al. [39], analyzed uncertain dissemination over smart porous sandwich GPL reinforced cylindrical shells based on HSDT in conjunction with a Fourier differential quadrature method. Nguyen et al. [40-44] conducted several researches investigating the influence of porosity coefficient, weight fraction of GPLs, electrical voltage, material length scale parameter, boundary condition and dynamic loads on FG porous plates that are reinforced with GPLs. They proposed an efficient numerical model based on refined plate theory and isogeometric analysis to predict the static and dynamic characteristics of functionally graded microplates reinforced with graphene platelets. One disadvantage of the GPLs, that can be significantly challenging, is called the GPLs agglomeration phenomenon that can negatively impact the properties of the resulting nanocomposite material. The phenomenon occurs due to the strong van der Waals forces between the GPLs, which can lead to the formation of large aggregates. The presence of these aggregates can reduce the effective surface area and increase the stress concentrations in the nanocomposite, leading to a decrease in mechanical strength and an increase in brittleness. Nguyen et al. [45] studied the transient performance of agglomerated graphene platelets reinforced porous sandwich plates based on higher-order shear deformation plate theory and using a NURBS-based isogeometric analysis framework. Through recent years, the combination of graphene nanofillers and porosity has been the focus of many researchers. Therefore, many works including bending, buckling, and post-buckling, free and forced vibrations are done and developed on various circular, rectangular, and shell geometries using various numerical solution and integration methods. With further observations and surveys through open literature, there was no evidence of dealing with nonlinear dynamics of FG annular porous plates reinforced with graphene platelets. This paper aims the study of dynamic analysis of annular functionally graded porous GPL-reinforced sandwich plates using FSDT and MHSDT with two GPL distribution patterns and two porosity dispersions, under a simple harmonic and an impact loading with clamped and simply supported boundary conditions. GPLs reinforcement phase in this work is distributed through the core of the sandwich plate. In sandwich plate structures, the distribution of the GPLs reinforcement phase in the core layer can enhance mechanical properties, such as compressive and shear strength [29]. Moreover, distributing GPLs in the core layer can create a highly interconnected network within the core material and the mechanical properties and resistance to fatigue and impact of the sandwich structure can be improved [30]. The time-dependent equations are derived by the implementation of the principle of minimum potential energy and then solved using Newmark’s direct integration method in combination with the viscous dynamic relaxation technique, which has not been previously employed in the literature for analyzing the dynamic behavior of sandwich structures. Finally, the effects of porosity coefficient and dispersions, GPL weight fraction and dispensations and thickness-to-radius ratios are illustrated. 2. Theoretical Modeling of Material Properties As seen in Fig. 1, the annular FG sandwich graphene-reinforced porous plate is shown with an inner radius and outer radius , the total thickness of (including = core and = face layers thicknesses) in and directions, originated the Cartesian coordinate is assumed at the center of the plate. The whole thickness of the plate includes . Fig. 1. Geometrical illustration of annular graphene-reinforced porous sandwich plate Figure 2 represents two GPL distributions, namely A and B with different porosity dispersion patterns as I and II for symmetric and asymmetric material. The volume fraction of the GPL as is assumed to be varying along the axis, with maximum values of and (see relation (7)). According to Fig. 2, E(z), G(z), and ρ(z) which are named elasticity moduli, shear moduli, and mass density of the porous GPL-reinforced sandwich plate, respectively, are defined as below for non-uniform graphene distributions [46]: in which is defined as [47]: For asymmetric porosity dispersion For symmetric porosity dispersion Also, denotes porosity coefficient ( ) and is the representative of the mass density coefficient. On the basis of the closed-cell cellular solid structures under the Gaussian Random Field scheme (GRF), it can be expressed to determine the mass density coefficient as [48]: Furthermore, based on the micromechanical model of Halpin-Tsai, the effective elastic moduli of the porous core can be defined as [49]: where and are the GPL property factors, also depict the young’s moduli of the metallic matrix. For the unknown values of the GPL factors, we have [49]: Fig. 2. GPL distributions and porosity dispersion patterns where , and are the average length, width, and thickness of the GPLs, respectively. Also, based on the GRF scheme, the varying Poisson’s ratio of the core is obtained by [50]: As depicted in Fig. 2, for different GPL distribution patterns, the volume fraction distribution ( ) along the z direction is given by [51]: GPL distribution A GPL distribution B in which the relationship between the volume fraction and weight fraction of the GPLs is defined by [52]: Utilizing the rule of mixture, and the mass density and Poisson’s ratio of the GPL-reinforced platelets, respectively, can be calculated as below [53]: in which , , , , and are mass density, Poisson’s ratio, and volume fraction of GPLs and metals, respectively. Fig. 3. Modules of elasticity and Poisson’s ratio along the thickness of the sandwich plate for different GPLs distributions and porosity patterns Modules of elasticity and Poisson’s ratio of the sandwich plate are illustrated in Fig. 3 for different GPLs distributions and porosity patterns using the properties expressed in Table 3. 3. Fundamental Equations Based on the modified higher-order shear deformation theory (primarily proposed by [54]), the displacement field of the plate is described by: where the displacements of the composite plate are formed of , and in the orthogonal coordinate system. Moreover, and are the displacements along r and z axes and also is the rotation terms of the middle surface (i.e., z = 0). Furthermore, the term in MHSDT is a mathematical parameter that cannot be physically defined. Besides, f(z) is a functional term that can be presumed as required in calculation. It should be noted that considering f(z) as 0, one can consider the first-order shear deformation theory of the plates (FSDT) for the displacement field. The simplicity and efficiency of using this theory are considering various functions in the displacement field and obtaining accurate results for different conditions. Considering the Von-Karman nonlinear large deflection relation, the strain field accommodating with the displacement field of equation (10) is defined as follows: in which normal strain fields and are directed towards and , respectively, and is the shear strain term. According to Hook’s law, the stress field for the core and face layers, are: For Face layers For GPL porous core Stress and moment resultants are defined as: Substituting equations into leads to the following constitutive relations: Where the elastic constants are expressed as: in which expresses the stiffness matrix of the layer number (n) as follows: The equations of motion can be obtained by implementation of the principle of minimum potential energy, which is expressed by the following integral: in which , and are strain energy changes due to internal loads, work changes due to external loads, and virtual kinetic energy changes, respectively. Changes in strain energy, virtual work, and kinetic energy of the whole system are defined as: For instance, by substituting strain components in terms of displacement field into the variations in strain energy relation yields: By integrating over the thickness of Eq. and substituting the stress and moment results, the following equation for the total thickness of the plate will be obtained: Integrating from each term of relation and implementing the fractional integration technique, results in the below relation in which the singleton integrals show the boundary conditions and the dual integrals represent the governing equations: By introducing mass inertia terms as follows: The kinetic energy changes can be expressed by: Finally, substituting Eq. into Eq. and giving zero to , , and , the equations of motion will result in [55]: We could also obtain the static equilibrium equations in terms of displacement field , , and as follows: To complete the formulations, equations should be joined with a set of initial and boundary conditions, as below: 4. Solving Procedure To discretize the time-dependent equations of motion of annular GPL-reinforced porous sandwich composite plate, the implicit Newmark method is utilized in this paper, and in order to solve the partial differential equations of motion, the viscous dynamic relaxation method (V-DR) with central finite difference technique is exploited. 4.1. Newmark Direct Integration Method The main aim of the Newmark approach is to discretize the time-varying equations using a reduced Taylor series by determination of accelerations and velocities in real forms at the next time step ( ) where and are Newmark’s constant parameters which can be determined to gain integration stability and accuracy, taken as and (average acceleration method), is the time interval, the difference of current and prior real-time displacement, velocity, and acceleration, respectively. Also, represents the displacements ( = , ) at and Placing equations and into the equilibrium equations will become: where the following are Newmark coefficients: Hence, for the sake of briefness, the following shrunken term should be written for the equations of motion: in which and are the equivalent load vector and stiffness matrix at DR iteration. 4.2. Viscous Damping Dynamic Relaxation Method Based on the dynamic relaxation method, to obtain stable solutions, the governing equations of motion will be converted from a static space into a fictitious dynamic one via the transformation of boundary value into initial value problems. The conversion would happen through the addition of damping and inertia terms to equilibrium equations [56]: in which and in the above relation are fictitious diametrical mass and damping matrices respectively, also and are the acceleration and velocity, respectively. Accurate estimations of mass matrix and damping factors are the criteria of convergence and stability in the DR method, thus based on the theorem of Gershgorin, they will be estimated as [56]: where in, is , , and w also known as freedom degrees of structure, is critical damping coefficient at spatial node, depicts fictitious incremental time which is generally taken as unity and the element of the stiffness matrix is determined as . To calculate the stiffness matrix, we have: in which are vectors of approximate solution. A set of finite difference statements should be written in order to finalize the iterative procedure by using the acceleration and velocity terms as The velocities at the next time step can be defined as [57]: Now out of balance force vector and kinetic energy of the system can be calculated as: In each time step by applying integration on velocities, the displacements will be calculated by: The V-DR process is continued with iterative steps to fulfill desired convergence criteria, i.e., and The following flowchart (Fig. 4) explains the V-DR method in combination with the Newmark direct integration technique: Fig. 4. Newmark direct integration in combination with the Dynamic relaxation method 5. Numerical Results 5.1. Validation As a first example to prove the accuracy of the present study in this section, the maximum deflection of a circular FG plate based on FSDT is compared with those of Reddy et al. [58]. The plate is subjected to a uniform distributed load with clamped and simply supported boundary conditions. The effect of different power law indices with three thickness-to-radius ratios are presented in the following. The material properties and uniform load which are used in this example are: Tables 1 and 2 show the great consistency of the viscous damping DR method with those gained by Reddy et al. [58]. Table 1. Comparison of nondimensional maximum deflection in simply supported condition with Ref. [58] Thickness radius ratio, Reddy et al. [58] Present study 0 10.481 10.623 10.822 10.469 10.623 10.820 2 5.539 5.610 5.708 5.534 5.609 5.706 4 5.153 5.217 5.307 5.155 5.219 5.308 8 4.810 4.870 4.954 4.810 4.864 4.955 10 4.712 4.772 4.855 4.711 4.764 4.857 50 4.291 4.338 4.428 4.286 4.338 4.430 100 4.223 4.280 4.359 4.220 4.278 4.358 1000 4.158 4.214 4.293 4.155 4.218 4.292 10e05 4.151 4.207 4.285 4.150 4.204 4.285 Table 2. Comparison of nondimensional maximum deflection in clamped supported condition with Ref. [58] Thickness radius ratio, Reddy et al. [58] Present study 0 2.639 2.781 2.979 2.635 2.778 2.971 2 1.444 1.515 1.613 1.441 1.511 1.614 4 1.320 1.384 1.473 1.317 1.374 1.476 8 1.217 1.278 1.362 1.215 1.275 1.360 10 1.190 1.250 1.333 1.188 1.240 1.333 50 1.080 1.137 1.216 1.078 1.134 1.219 100 1.063 1.119 1.199 1.054 1.118 1.198 1000 1.047 1.103 1.182 1.048 1.107 1.182 10e05 1.045 1.101 1.180 1.042 1.101 1.180 For a second example to prove the validity and precision of the Newmark integration method, the results of forced vibration analysis under impulsive loading with simply supported boundary conditions are compared with those reported by Ref. [59]. Since there is no evidence based on dynamic analysis of annular FG sandwich porous plates reinforced by graphene platelets in open literature, the following sample is presented in which the plate is degraded into a single layer FG annular plate conforming a power-law function with varying Poisson’s ratio based on Mori-Tanaka scheme through different material grading indices whose material properties are: Figure 5 shows normalized nondimensional deflection at normalized radius point versus nondimensional time with . The results show the efficiency and accuracy of the procedure and are found to be in great consistency with the analytical solution of Ref. [59]. Fig. 5. Comparison study for the dynamic behavior of simply supported FG annular plate with different power law indices under impulsive loading. To carry out the convergence study to state the number of spatial nodes, the following results are achieved for both boundary conditions based on FSDT and MHSDT. For instance, non-dimensional deflection of clamped-clamped porous GPL reinforced annular plate under an impulsive loading ( is illustrated in Fig. 6 in terms of time for different node numbers. From the results, the 40 and 30 nodes for FSDT and MHSDT, respectively, are considered for the analysis of the entire process because their responses have acceptable precision with suitable time of analysis. Fig. 6. Illustrations of the number of spatial nodes with GPL distribution A, Porosity dispersion II and , , for (a) FSDT and (b) MHSDT It is noticed that the same number of nodes are used for simply supported conditions with and different GPL distributions, porosity coefficients, and loadings. From Fig. 6 and similar analyses for convergence study, it can be concluded that modified-higher order theory can achieve more efficient and accurate results with fewer number of nodes. 5.2. Parametric Study This section is devoted to investigating the influence of some substantial factors namely porosity dispersion and coefficients, GPL distribution and weight fractions, aspect ratio, and boundary conditions under two types of loadings on dynamic results of the GPL-reinforced porous FGM annular sandwich plate. The isotropic face sheets are assumed to be completely interconnected with the porous core and have the same properties as the metallic matrix of the core. Implemented material properties of the sandwich plate are taken from [49] shown in Table 3: Table 3. The material property of GPL-reinforced porous core and isotropic face sheets [49] Elasticity moduli Density Poisson’s ratio (Gpa) (Kg/m^3) 1010 1062.5 0.186 68.3 2689.8 0.34 (Face sheets and metallic matrix) To carry out the parametric study, an annular graphene platelet reinforced porous sandwich plate with different aspect ratios and geometric parameters with ( ) and ( ), and are assumed in this section. Also, two types of loadings, an impact, and a simple harmonic excitation, are applied on the upper surface of the sandwich plate as follows: For all cases, the non-dimensional dynamic deflections are computed with the following loading specifications at normalized point R which are defined below: Also, the GPL parameters are taken as , and The effect of thickness-to-radius ratio for the FSDT and MHSDT subjected to an impact loading with S-S and C-C boundary conditions and are illustrated in figures 7 and 8 for impact and harmonic loadings, respectively. As shown in Figures 7 and 8, the difference between FSDT and MHSDT becomes greater as the thickness of the FG sandwich annular plate is increased. One reason for this occurrence is the lack of accuracy in FSDT for thicker plates due to the consideration of shear strain as linear, hence with higher-order displacement fields, one can achieve displacements with higher accuracy. Also, it can be observed that the mentioned differences are more noticeable in S-S boundary conditions compared to C-C ones. Fig. 7. Effects of aspect ratio on the dynamic behavior of sandwich annular plate subjected to impact loading with , , for (a), (b) and (c), (d) Fig. 8. Effects of aspect ratio on the dynamic behavior of sandwich annular plate subjected to harmonic loading with , , for (a), (b) and (c), (d) Nondimensional deflection versus dimensionless time ( ) for two graphene distributions A and B with different porosity dispersions namely I and II under impact and harmonic loadings are shown in figures 9 and 10, respectively. As seen in Figures 9 and 10, the effect of weight fraction on the dynamic behavior of sandwich porous plate is also considered for both theories of MHSDT and FSDT with different thickness-to-radius ratios and boundary conditions. As shown in Fig. 9, adding to the porous core of the sandwich plate, the flexural rigidity will increase with the addition of to the porous core of sandwich plate, the flexural rigidity will increase significantly, for instance, this increase is about 27.2% for porosity dispersion II in FSDT and 25% in MHSDT. Similarly, using porosity dispersion I, this increase is about 14.2% and 14.8% for FSDT and MHSDT, respectively. As illustrated in Fig. 10 for GPL distribution B and clamped-clamped supported porous sandwich annular plate with h⁄r_o =0.15 under a harmonic loading, with the addition of only 0.8 wt.% to the porous plate, an increase in bending rigidity is observed. In this case, the values of 16.2% and 13.6% are seen for dispersions II and I in FSDT and the ones of 17.9% and 13.9%, respectively, in MSHDT. Also, as observed by adding more GPL to the porous core leads to a great decrease in the amplitude of vibrational waves of the whole structure in which the results for MHSDT are observed to be more accurate amongst the FSDTs by revealing more stable peak point kinetic energy at the end of each Dynamic Relaxation algorithm. Furthermore, as figures 9 and 10 illustrate, for thicker plates, porosity dispersion II combined with GPL distributions A and B reveal higher deflection changes in the maximum porosity coefficient between MHSDT and FSDT. The greater the porosity coefficient, the more reduction in the stiffness of the plate, therefore the following particularly discusses the effect of the porosity coefficient on the dynamic history of porous sandwich annular plate reinforced by graphene platelets. Fig. 9. Effect of GPL weight fraction on the dynamic behavior of an S-S edged porous sandwich plate subjected to an impact loading using graphene distribution A Fig. 10. Effect of GPL weight fraction on dynamic behavior of a C-C edged porous sandwich plate subjected to a harmonic loading using graphene distribution B with Figure 11 shows the dynamic behavior of a C-C edged porous sandwich plate subjected to impact and harmonic loadings in graphene distribution A and porosity dispersions I, II with based on both FSDT and MHSDT. The GPL weight fraction is considered a constant amount of 0.6% in this section and the porosity coefficient is changing from 0.2 to 0.8. The stiffness of the plate shows better reinforcement behavior in graphene distribution A combined with porosity dispersion II as the porosity coefficient greatens. Figure 12 discusses the dynamic response of an S-S conditioned graphene-reinforced sandwich annular plate under impact loading with based on both theories with GPL distribution B and porosity distributions I and II. It shows that the same as clamped boundary conditions, the porosity dispersion II has a bigger influence on the strengthening of the porous core in simply supported conditions. Comparing the time history results in figures 7 to 12 with respect to the strength reinforcing of the plates leads to the following best order of composition of porosity and GPL distributions which is (GPL A-Porosity II), (GPL B-porosity II), (GPL A-Porosity II) and (GPL B-Porosity I). Fig. 11. Effect of porosity on the dynamic behavior of a C-C-edged porous sandwich plate subjected to impact and harmonic loadings with graphene distribution A Fig. 12. Effect of porosity on the dynamic behavior of an S-S-edged porous sandwich plate subjected to an impact loading with graphene distribution B and Ultimately, in order to assess the relative efficacy of each combined GPL and porosity pattern, the outcomes of an impulsive loading on the plate are presented in Figure 13. The figure demonstrates that the optimal reinforcement capacity is achieved through the implementation of GPL distribution A in conjunction with porosity dispersion II. Fig. 13. Effect of GPL distributions and porosity patterns in terms of time history for C-C edged porous sandwich plate with 6. Conclusions and Remarks This paper investigates the dynamic analysis of annular functionally graded porous GPL-reinforced sandwich plates based on both MHSDT and FSDT and different boundary conditions. According to closed-cell cellular solids with Gaussian Random Field and Halpin-Tsai micromechanics, the effective material properties of the porous core are developed. The Newmark direct integration technique in combination with the viscous Dynamic Relaxation method is applied to solve time-dependent equations of motion. In fact, the primary and innovative aspect of this approach lies in the combination of the viscous dynamic relaxation method with the Newmark integration method, which has not been previously employed in the literature for sandwich structures. Additionally, the utilization of the modified higher-order shear deformation theory, two graphene distributions, and two porosity dispersions containing various GPL weight fractions and pore coefficients are considered for the porous core. Considering the dynamic behavior of porous sandwich plates under impact and harmonic loads with S-S and C-C boundary conditions and different aspect ratios, some remarkable points are concluded as follows: • An increase of more than 27 % in plate stiffness is observed by adding only 0.8 wt.% GPL to the porous core of the sandwich plate. • Both symmetric (II) and asymmetric (I) porosity dispersions play a significant role in the dynamic behavior of the plate, however symmetric porosity dispersion (II) owns the most influence in deflection decrease. Also, among GPL distributions A and B, the non-uniform symmetric graphene distribution A acts as the best strengthener pattern. • A combination of (GPL A-Porosity II), (GPL B-porosity II), (GPL A-Porosity II), and (GPL B-Porosity I), respectively, leads to the best order of strength reinforcing results for the plates. • With an increase in porosity, the difference between the results of FSDT and MHSDT greatens. Conflicts of Interest The author declares that there is no conflict of interest regarding the publication of this manuscript. [1] Beskos, D. & Leung, K., 1984. Dynamic response of plate systems by combining finite differences, finite elements and laplace transform. Computers & structures, 19 (5-6), pp.763-775. [2] Nath, Y., Dumir, P. & Bhatiaf, R., 1985. Nonlinear static and dynamic analysis of circular plates and shallow spherical shells using the collocation method. International journal for numerical methods in engineering, 21 (3), pp.565-578. [3] Smaill, J., 1990. Dynamic response of circular plates on elastic foundations: Linear and non-linear deflection. Journal of sound and vibration, 139 (3), pp.487-502. [4] Srinivasan, R. & Ramachandra, L., 1990. Axisymmetric nonlinear dynamic response of bimodulus annular plates. [5] Shen, W.Q. & Jones, N., 1993. Dynamic response and failure of fully clamped circular plates under impulsive loading. International journal of impact engineering, 13 (2), pp.259-278. [6] Dey, S. & Rao, V.T., 1997. Transient response of circular plates and membranes: A numerical approach. International journal of mechanical sciences, 39 (12), pp.1405-1413. [7] Bassi, A., Genna, F. & Symonds, P., 2003. Anomalous elastic–plastic responses to short pulse loading of circular plates. International Journal of Impact Engineering, 28 (1), pp.65-91. [8] Peng, J.-S., Yuan, Y.-Q., Yang, J. & Kitipornchai, S., 2009. A semi-analytic approach for the nonlinear dynamic response of circular plates. Applied Mathematical Modelling, 33 (12), pp.4303-4313. [9] Aiyesimi, Y., Mohammed, A. & Sadiku, S., 2011. A finite element analysis of the dynamic responses of a thick uniform elastic circular plate subjected to an exponential blast loading. American Journal of Computational and Applied Mathematics, 1 (2), pp.57-62. [10] Eipakchi, H. & Khadem Moshir, S., 2020. Dynamic response determination of viscoelastic annular plates using fsdt–perturbation approach. Journal of Computational Applied Mechanics, 51 (1), [11] Zenkour, A., 2005. A comprehensive analysis of functionally graded sandwich plates: Part 2—buckling and free vibration. International Journal of Solids and Structures, 42 (18-19), pp.5243-5258. [12] Dai, H.-L., Guo, Z.-Y. & Yang, L., 2013. Nonlinear dynamic response of functionally graded materials circular plates subject to low-velocity impact. Journal of Composite Materials, 47 (22), [13] Dai, H.-L., Dai, T. & Cheng, S.-K., 2015. Transient response analysis for a circular sandwich plate with an fgm central disk. Journal of Mechanics, 31 (4), pp.417-426. [14] Molla-Alipour, M., 2016. Dynamic behavior analysis of fg circular and annular plates with stepped variations of thickness under various load. Modares Mechanical Engineering, 16 (7), pp.251-260. [15] Arshid, E. & Khorshidvand, A.R., 2018. Free vibration analysis of saturated porous fg circular plates integrated with piezoelectric actuators via differential quadrature method. Thin-Walled Structures, 125, pp.220-233. [16] Chen, D., Yang, J. & Kitipornchai, S., 2019. Buckling and bending analyses of a novel functionally graded porous plate using chebyshev-ritz method. Archives of Civil and Mechanical Engineering, 19 (1), pp.157-170. [17] Cuong-Le, T., Nguyen, K.D., Nguyen-Trong, N., Khatir, S., Nguyen-Xuan, H. & Abdel-Wahab, M., 2021. A three-dimensional solution for free vibration and buckling of annular plate, conical, cylinder and cylindrical shell of fg porous-cellular materials using iga. Composite Structures, 259, pp.113216. [18] Daikh, A.A. & Zenkour, A.M., 2019. Effect of porosity on the bending analysis of various functionally graded sandwich plates. Materials Research Express, 6 (6), pp.065703. [19] Daikh, A.A. & Zenkour, A.M., 2019. Free vibration and buckling of porous power-law and sigmoid functionally graded sandwich plates using a simple higher-order shear deformation theory. Materials Research Express, 6 (11), pp.115707. [20] Fouda, N., El-Midany, T. & Sadoun, A., 2017. Bending, buckling and vibration of a functionally graded porous beam using finite elements. Journal of applied and computational mechanics, 3 (4), [21] Rahmani, M., Mohammadi, Y. & Kakavand, F., 2019. Vibration analysis of different types of porous fg circular sandwich plates. ADMT Journal, 12 (3), pp.63-75. [22] Babaei, M., Hajmohammad, M.H. & Asemi, K., 2020. Natural frequency and dynamic analyses of functionally graded saturated porous annular sector plate and cylindrical panel based on 3d elasticity. Aerospace Science and Technology, 96, pp.105524. [23] Esmaeilzadeh, M., Golmakani, M. & Sadeghian, M., 2020. A nonlocal strain gradient model for nonlinear dynamic behavior of bi-directional functionally graded porous nanoplates on elastic foundations. Mechanics Based Design of Structures and Machines, pp.1-20. [24] Akbaş, Ş., Fageehi, Y., Assie, A. & Eltaher, M., 2020. Dynamic analysis of viscoelastic functionally graded porous thick beams under pulse load. Engineering with Computers, pp.1-13. [25] Kitipornchai, S., Chen, D. & Yang, J., 2017. Free vibration and elastic buckling of functionally graded porous beams reinforced by graphene platelets. Materials & Design, 116, pp.656-665. [26] Chen, D., Yang, J. & Kitipornchai, S., 2017. Nonlinear vibration and postbuckling of functionally graded graphene reinforced porous nanocomposite beams. Composites Science and Technology, 142, [27] Yang, J., Chen, D. & Kitipornchai, S., 2018. Buckling and free vibration analyses of functionally graded graphene reinforced porous nanocomposite plates based on chebyshev-ritz method. Composite Structures, 193, pp.281-294. [28] Polit, O., Anant, C., Anirudh, B. & Ganapathi, M., 2019. Functionally graded graphene reinforced porous nanocomposite curved beams: Bending and elastic stability using a higher-order model with thickness stretch effect. Composites Part B: Engineering, 166, pp.310-327. [29] Li, Q., Wu, D., Chen, X., Liu, L., Yu, Y. & Gao, W., 2018. Nonlinear vibration and dynamic buckling analyses of sandwich functionally graded porous plate with graphene platelet reinforcement resting on winkler–pasternak elastic foundation. International Journal of Mechanical Sciences, 148, pp.596-610. [30] Esmaeilzadeh, M. & Kadkhodayan, M., 2019. Numerical investigation into dynamic behaviors of axially moving functionally graded porous sandwich nanoplates reinforced with graphene platelets. Materials Research Express, 6 (10), pp.1050b7. [31] Safarpour, M., Rahimi, A., Alibeigloo, A., Bisheh, H. & Forooghi, A., 2019. Parametric study of three-dimensional bending and frequency of fg-gplrc porous circular and annular plates on different boundary conditions. Mechanics Based Design of Structures and Machines, pp.1-31. [32] Rahimi, A., Alibeigloo, A. & Safarpour, M., 2020. Three-dimensional static and free vibration analysis of graphene platelet–reinforced porous composite cylindrical shell. Journal of Vibration and Control, 26 (19-20), pp.1627-1645. [33] Zhao, S., Yang, Z., Kitipornchai, S. & Yang, J., 2020. Dynamic instability of functionally graded porous arches reinforced by graphene platelets. Thin-Walled Structures, 147, pp.106491. [34] Nguyen, L.B., Nguyen, N.V., Thai, C.H., Ferreira, A. & Nguyen-Xuan, H., 2019. An isogeometric bézier finite element analysis for piezoelectric fg porous plates reinforced by graphene platelets. Composite Structures, 214, pp.227-245. [35] Arshid, E., Amir, S. & Loghman, A., 2020. Static and dynamic analyses of fg-gnps reinforced porous nanocomposite annular micro-plates based on msgt. International Journal of Mechanical Sciences, 180, pp.105656. [36] Gao, W., Qin, Z. & Chu, F., 2020. Wave propagation in functionally graded porous plates reinforced with graphene platelets. Aerospace Science and Technology, 102, pp.105860. [37] Nejadi, M., Mohammadimehr, M. & Mehrabi, M., 2021. Free vibration and stability analysis of sandwich pipe by considering porosity and graphene platelet effects on conveying fluid flow. Alexandria Engineering Journal, 60 (1), pp.1945-1954. [38] Tao, C. & Dai, T., 2021. Isogeometric analysis for postbuckling of sandwich cylindrical shell panels with graphene platelet reinforced functionally graded porous core. Composite Structures, 260, [39] Khayat, M., Baghlani, A. & Najafgholipour, M., 2021. The propagation of uncertainty in the geometrically nonlinear responses of smart sandwich porous cylindrical shells reinforced with graphene platelets. Composite Structures, 258, pp.113209. [40] Nguyen, N.V., Lee, J. & Nguyen-Xuan, H., 2019. Active vibration control of gpls-reinforced fg metal foam plates with piezoelectric sensor and actuator layers. Composites Part B: Engineering, 172 , pp.769-784. [41] Nguyen, N.V., Nguyen, L.B., Nguyen-Xuan, H. & Lee, J., 2020. Analysis and active control of geometrically nonlinear responses of smart fg porous plates with graphene nanoplatelets reinforcement based on bézier extraction of nurbs. International Journal of Mechanical Sciences, 180, pp.105692. [42] Nguyen, N.V., Nguyen-Xuan, H., Lee, D. & Lee, J., 2020. A novel computational approach to functionally graded porous plates with graphene platelets reinforcement. Thin-Walled Structures, 150, [43] Nguyen, N.V. & Lee, J., 2021. On the static and dynamic responses of smart piezoelectric functionally graded graphene platelet-reinforced microplates. International Journal of Mechanical Sciences, 197, pp.106310. [44] Nguyen, N.V., Phan, D.-H. & Lee, J., 2022. Nonlinear static and dynamic isogeometric analysis of functionally graded microplates with graphene-based nanofillers reinforcement. Aerospace Science and Technology, 127, pp.107709. [45] Nguyen, N.V., Phan, D.-H. & Lee, J., 2023. On the transient performance of agglomerated graphene platelets-reinforced porous sandwich plates. Thin-Walled Structures, 183, pp.110316. [46] Barati, M.R. & Zenkour, A.M., 2019. Analysis of postbuckling of graded porous gpl-reinforced beams with geometrical imperfection. Mechanics of Advanced Materials and Structures, 26 (6), [47] Gao, K., Gao, W., Chen, D. & Yang, J., 2018. Nonlinear free vibration of functionally graded graphene platelets reinforced porous nanocomposite plates resting on elastic foundation. Composite Structures, 204, pp.831-846. [48] Ansari, R., Hassani, R., Gholami, R. & Rouhi, H., 2020. Nonlinear bending analysis of arbitrary-shaped porous nanocomposite plates using a novel numerical approach. International Journal of Non-Linear Mechanics, 126, pp.103556. [49] Li, K., Wu, D., Chen, X., Cheng, J., Liu, Z., Gao, W. & Liu, M., 2018. Isogeometric analysis of functionally graded porous plates reinforced by graphene platelets. Composite Structures, 204, [50] Roberts, A. & Garboczi, E.J., 2002. Computation of the linear elastic properties of random porous materials with a wide variety of microstructure. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 458 (2021), pp.1033-1054. [51] Dong, Y., Li, Y., Chen, D. & Yang, J., 2018. Vibration characteristics of functionally graded graphene reinforced porous nanocomposite cylindrical shells with spinning motion. Composites Part B: Engineering, 145, pp.1-13. [52] Barati, M.R. & Zenkour, A.M., 2019. Vibration analysis of functionally graded graphene platelet reinforced cylindrical shells with different porosity distributions. Mechanics of Advanced Materials and Structures, 26 (18), pp.1580-1588. [53] Nguyen, Q.H., Nguyen, L.B., Nguyen, H.B. & Nguyen-Xuan, H., 2020. A three-variable high order shear deformation theory for isogeometric free vibration, buckling and instability analysis of fg porous plates reinforced by graphene platelets. Composite Structures, 245, pp.112321. [54] Dastjerdi, S., Abbasi, M. & Yazdanparast, L., 2017. A new modified higher-order shear deformation theory for nonlinear analysis of macro-and nano-annular sector plates using the extended kantorovich method in conjunction with sapm. Acta Mechanica, 228 (10), pp.3381-3401. [55] Dastjerdi, S. & Abbasi, M., 2020. A new approach for time-dependent response of viscoelastic graphene sheets embedded in visco-pasternak foundation based on nonlocal fsdt and mhsdt theories. Mechanics of Time-Dependent Materials, 24 (3), pp.329-361. [56] Golmakani, M. & Kadkhodayan, M., 2011. Nonlinear bending analysis of annular fgm plates using higher-order shear deformation plate theories. Composite Structures, 93 (2), pp.973-982. [57] Rezaiee-Pajand, M., Alamatian, J. & Rezaee, H., 2017. The state of the art in dynamic relaxation methods for structural mechanics part 1: Formulations. Iranian Journal of Numerical Analysis and Optimization, 7 (2), pp.65-86. [58] Reddy, J., Wang, C. & Kitipornchai, S., 1999. Axisymmetric bending of functionally graded circular and annular plates. European Journal of Mechanics-A/Solids, 18 (2), pp.185-199. [59] Eshraghi, I. & Dag, S. 2020. Forced vibrations of functionally graded annular and circular plates by domain‐boundary element method. Wiley Online Library.
{"url":"https://macs.semnan.ac.ir/article_8059.html","timestamp":"2024-11-02T15:36:04Z","content_type":"text/html","content_length":"148878","record_id":"<urn:uuid:e131f94b-179b-4471-ace2-b9170850c6ae>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00306.warc.gz"}
Printable Calendars AT A GLANCE Find The Mean Worksheet Find The Mean Worksheet - 16 mean, median, mode and range worksheets. These mean, mode, median, and range worksheets are a great resource for children in kindergarten, 1st grade, 2nd grade, 3rd grade, 4th grade, and 5th grade. It is the sum of all the numbers in the set divided by the number of values. Beginner's worksheet for finding mean. Mean worksheet (include negative numbers). Find the average of the numbers shown. To find the mean (average) of a dataset, follow these steps: (g) 20, 30, 10, 20, 40, 50, 60, 10, 80, 30. Includes reasoning and applied questions. Welcome to the math salamanders mean median mode range worksheets. Web find the averages free. Beginner's worksheet for finding mean. All of our mean, median, and mode worksheets are easy to follow and make the topic engaging for students. Web click here for answers. 16 mean, median, mode and range worksheets. Mean worksheet (include negative numbers). (d) 7, 3, 8, 9, 6, 5. Work out the median for the each of the following. These worksheets are aimed at students in 5th and 6th grade. Janna's test scores were 120, 150, 130 and 100. Mean, mode, median & range worksheet. (a) 5, 1, 4, 6, 8. mean median mode and range interactive worksheet calculating the mean It is the sum of all the numbers in the set divided by the number of values. (requires only basic division skills.) 5th through 7th grades. Web 1) find the value of x in each of the following sets of observations: Examples, solutions, videos, and worksheets to help grade 5 students learn how to find the mean from a dataset.. Math Worksheets Mean Median Mode A) 45, 62, 72, x, 59, 62; Web click here for answers. (b) 9, 1, 3, 6, 7, 8, 9. Click the buttons to print each worksheet and answer key. How to find the mean? Mean Girls Musical Web get your free mean maths worksheet of 20+ mean, median, mode and range questions and answers. Web find the mean worksheets. Find the average of the numbers shown. In these worksheets, students determine the mean or average of various data sets. Welcome to the math salamanders mean median mode range worksheets. Mean Worksheets Washington — during the busiest time of the tax filing season, the internal revenue service kicked off its 2024 tax time guide series to help remind taxpayers of key items they’ll need to file a 2023 tax return. Web our mean, mode, median, and range worksheets are free to download, easy to use, and very flexible. Beginner's worksheet for finding. A Worksheet on Finding the Mean (1) Teaching Resources Find the average of the numbers shown. We offer a large range of math worksheets to help your student or child master these statistical measurements. Select a different activity > one atta time. Mean, mode, median & range worksheet. Web our mean, mode, median, and range worksheets are free to download, easy to use, and very flexible. Mean Median Mode Range Worksheet Free Mean Median Mode Worksheets Here you will find a wide range of free printable worksheets, which will help your child learn how to find the mean, median, mode and range of a set of data points. Web worksheets for calculating the mean (average), median, mode, and range. (g) 20, 30, 10, 20, 40, 50, 60, 10, 80, 30. Related lessons on mean, median, mode.. Fåb on Twitter "I mean." 16 mean, median, mode and range worksheets. 2 {6, 1, 2} 6 3. Select a different activity > one atta time. (requires only basic division skills.) Find the median, mode, and range of the numbers shown on the tiles. 8 Best Images of Absolute Value Worksheets 6th Grade Answers Absolute Web our mean, mode, median, and range worksheets are free to download, easy to use, and very flexible. Washington — during the busiest time of the tax filing season, the internal revenue service kicked off its 2024 tax time guide series to help remind taxpayers of key items they’ll need to file a 2023 tax return. Web get your free. Review Mean worksheet This worksheet explains how to solve this problem (from top to bottom): Obtain the central value by adding up to 8 data values in the range 0 to 100 and dividing it by the number of values. It is the sum of all the numbers in the set divided by the number of values. Web this resource is a comprehensive. Find The Mean Worksheet - The mean of a data set is what is commonly thought of as the average; (b) 9, 1, 3, 6, 7, 8, 9. Sum of the data points / number of data points, and find the mean of integer data values. Here you will find a wide range of free printable worksheets, which will help your child learn how to find the mean, median, mode and range of a set of data points. Median, mode, and range only free. Work out the median for the each of the following. Web get your free mean maths worksheet of 20+ mean, median, mode and range questions and answers. Only integers are included, and all of the means will be whole numbers. 4 {2, 4, 6, 8} 10 9. Mean is part of our series of lessons to support revision on mean, median, mode. 1 {2, 3, 7} first add all then divide the total by how the numbers. The mean of a data set is what is commonly thought of as the average; Related lessons on mean, median, mode. Welcome to the math salamanders mean median mode range worksheets. X = 3) given that the mean of the numbers 2, 3, x + 1, 2 and x is 4. In statistics, you will encounter the mean, the median, the mode and the range. Click the buttons to print each worksheet and answer key. Welcome to the math salamanders mean median mode range worksheets. Obtain the central value by adding up to 8 data values in the range 0 to 100 and dividing it by the number of values. Web Our Mean, Mode, Median, And Range Worksheets Are Free To Download, Easy To Use, And Very Flexible. Web find the averages free. Web find the mean worksheets. Each worksheet has 15 problems finding the mean of a set of numbers. Word problems are included for practice. X = 2) Given That The Mean For The Numbers 3, X, X, 8 And 5 Is 10. (g) 20, 30, 10, 20, 40, 50, 60, 10, 80, 30. It includes a mixture of worksheets and practice pyramids, including: Washington — during the busiest time of the tax filing season, the internal revenue service kicked off its 2024 tax time guide series to help remind taxpayers of key items they’ll need to file a 2023 tax return. Beginner's worksheet for finding mean. These Worksheets Are Aimed At Students In 5Th And 6Th Grade. (requires only basic division skills.) 5th through 7th grades. Related lessons on mean, median, mode. 1 {2, 3, 7} first add all then divide the total by how the numbers. Janna's test scores were 120, 150, 130 and 100. Find The Average Of The Numbers Shown. Word problems for finding the average number. There are two sets of mean worksheets. 2 {6, 1, 2} 6 3. Find the median, mode, and range of the numbers shown on the tiles. Related Post:
{"url":"https://ataglance.randstad.com/viewer/find-the-mean-worksheet.html","timestamp":"2024-11-05T13:32:25Z","content_type":"text/html","content_length":"36177","record_id":"<urn:uuid:f65dd92c-b97a-4759-903c-7a97dfb4297a>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00569.warc.gz"}
Brouncker, William | Encyclopedia.com Brouncker, William Brouncker, William (b. 1620; d. Westminster, London, England, 5 April 1684) Brouncker’s father was Sir William Brouncker, who was created viscount of Castle Lyons, Ireland, in September 1645; the father died the same November, and was succeeded by the son. The title passed to William’s brother Henry in 1684, and since both were unmarried, became extinct when Henry died in 1687. William’s mother was Winefrid, daughter of William Leigh of Newenham, Warwickshire. Brouncker entered Oxford University at the age of sixteen and showed proficiency in mathematics, languages, and medicine. He received the degree of Doctor of Physick in 1647, and for the next few years devoted himself mainly to mathematics. He held several offices of prominence: Member of Parliament for Westbury in 1660, president of Gresham College from 1664 to 1667, commissioner for the navy from 1664 to 1668, comptroller of the treasurer’s accounts from 1668 to 1679, and master of St. Catherine’s Hospital near the Tower from 1681 to 1684. Brouncker was the king’s nominee for president of the Royal Society, and he was appointed without opposition—at a time when there were many talented scientists. He was reappointed annually, and he guarded his position zealously, possibly holding on to it for too long. He resigned in 1677, in effect at the suggestion of an election, and was succeeded by Sir Joseph Williamson. He was an enthusiastic supporter of the society’s bias toward experimentation and was very energetic in suggesting and assessing experimental work until Hooke took over that job. Sprat’s history records two experiments performed by Brouncker, one on the increase of weight in metals due to burning and the other on the recoil of guns. His major scientific work was undoubtedly in mathematics. Much of his work was done in correspondence with John Wallis and was published in the latter’s books. One of Wallis’ major achievements was an expression for π in the form of an infinite product, recorded in his Arithmetica infinitorum. This book states that Brouncker was asked to give an alternative expression, which he did in terms of continued fractions, first used by Cataldi in 1613, as from which he calculated π correct to ten decimal places. In an exchange of letters between Fermat and Wallis, the French mathematician had proposed for general solution the Diophantine equation ax^2 + 1 = y^2. Brouncker was able to supply an answer equivalent to x = 2r/r^2 – a, y = r^2 + a/r^2 – a, where r is any integer, as well as another answer in terms of continued fractions. A paper in the Philosophical Transactions (3 [1668], 753–764) gives a solution by Brouncker of the quadrature of a rectangular hyperbola. He arrived at a result equivalent to and found similar infinite series related to this problem. In order to calculate the sum, he discussed the convergence of the series and was able to compute it as 0.69314709, recognizing this number as proportional to log 2. By varying the problem slightly, he was able to show that 2.302585 was proportional to log 10. Brouncker also improved Neile’s method for rectifying the semicubical parabola ay^2 = x^3 and made at least three attempts to prove Huygen’s assertion that the cycloidal pendulum was isochronous. A letter from Collins to James Gregory indicates that Brouncker knew how to “turn the square root into an infinite series,” possibly an allusion to the binomial series. Brouncker was a close associate of Samuel Pepys, socially and professionally, and is mentioned many times in the Diary. Pepys valued his friendship highly, but sometimes doubted his professional ability. Brouncker shared with Pepys an interest in music, and his only published book is a translation (1653) of Descartes’s Musicae compendium with notes as long as the work itself, including a mathematical attempt to divide the diapason into seventeen equal semitones. His fame as a mathematician rests largely on an ability to solve problems set by others. If he had devoted himself more fully to his own studies, he would undoubtedly have been one of the best mathematicians during a period in which talent abounded. A portrait by Sir Peter Lely is in the possession of the Royal Society. Works concerning Brouncker or his work are E. S. de Beer, ed., The Diary of John Evelyn III (Oxford, 1955), 285–286, 332, 353; T. Birch, History of the Royal Society, Vol. I (London, 1756–1757); Lord Braybrooke, ed., Diary and Correspondence of Samuel Pepys (London, 1865); Sir B. Burke, Extinct Peerages (London, 1883), p. 78; M. H. Nicolson, Pepys’ Diary and the New Science (Charlottesville, Va., 1965), pp. 11, 28–29, 109, 135; H. W. Robinson and W. Adams, The Diary of Robert Hooke (London, 1935); J. F. Scott and Sir Harold Hartley, “William Viscount Brouncker,” in Notes and Records of the Royal Society, 15 (1960–1961), 147–156; T. Sprat, History of the Royal Society (London, 1667), pp. 57, 136, 228–229; J. Wallis, Arithmetica infinitorum (Oxford, 1656), p. 181; Tractauts duo (Oxford, 1659), p. 92; A Treatise of Algebra (Oxford, 1685), p. 363; D. T. Whiteside, “Brouncker’s Mathematical Papers,” in Notes and Records of the Royal Society, 15 (1960–1961), 157; A. à Wood, Athenae oxonienses, P. Bliss, ed. (London, 1820), p. 98. John Dubbey More From encyclopedia.com About this article Brouncker, William
{"url":"https://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/brouncker-william","timestamp":"2024-11-07T13:35:46Z","content_type":"text/html","content_length":"49346","record_id":"<urn:uuid:918671a8-dabd-4eb0-b201-3682ab11c00f>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00322.warc.gz"}
Music Notation Systems: Gallery Each image below shows a chromatic scale from C to C in a particular alternative music notation system. For a general introduction see the Guided Tour. Click the button below for controls that let you select which systems are displayed and change how they are sorted. For example, deselect a checkbox to filter out all the systems that have that particular characteristic. Line Lines per Octave Whole step 5 Whole step 4, 3 Whole step 6 Whole step & Major 3rd 4 Minor 3rd 3, 4 Major 3rd 3 Major 3rd 2 Tritone or Octave 2, 1 7-5 Pattern 3, 5, 6 Select All Deselect All • Bold and Dashed Lines • Has bold lines • Has dashed lines • No bold or dashed lines • Solid or Hollow Noteheads • Indicate note duration • Indicate pitch: 6-6 pattern • Indicate pitch: 7-5 pattern • Neither pitch nor duration • Notehead Shapes • Has 1 notehead shape • Has 2 or 3 shapes • More than 3 shapes • Vertical Space Required • More than in traditional notation • Equal to or less than in traditional notation As an educational and informational resource, we seek to present these music notation systems in a fair and even-handed way. Unless otherwise noted, they each meet all of our desirable criteria for alternative music notation systems. A number of additional systems that take different approaches are listed on our More Notation Systems page. If you have designed a music notation system, and would like us to consider adding it to our site, see For Notation Designers
{"url":"https://musicnotation.org/systems/gallery/","timestamp":"2024-11-02T07:47:59Z","content_type":"text/html","content_length":"75610","record_id":"<urn:uuid:157fa12d-bc1f-4b6b-994e-3776918c54d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00514.warc.gz"}
LM 13.3 Varying force Collection 13.3 Varying force by Benjamin Crowell, Light and Matter licensed under the Creative Commons Attribution-ShareAlike license. 13.3 Varying force Up until now we have done no actual calculations of work in cases where the force was not constant. The question of how to treat such cases is mathematically analogous to the issue of how to generalize the equation (distance)=(velocity)(time) to cases where the velocity was not constant. There, we found that the correct generalization was to find the area under the graph of velocity versus time. The equivalent thing can be done with work: general rule for calculating work The work done by a force `F` equals the area under the curve on a graph of `F_(?)` versus `x`. (Some ambiguities are encountered in cases such as kinetic friction.) The examples in this section are ones in which the force is varying, but is always along the same line as the motion, so `F` is the same as `F_(?)`. In which of the following examples would it be OK to calculate work using , and in which ones would you have to use the area under the `F-x` graph? (a) A fishing boat cruises with a net dragging behind it. (b) A magnet leaps onto a refrigerator from a distance. (c) Earth's gravity does work on an outward-bound space probe. (answer in the back of the PDF version of the book) An important and straightforward example is the calculation of the work done by a spring that obeys Hooke's law, force being exerted by the spring, not the force that would have to act on the spring to keep it at this position. That is, if the position of the cart in figure p is to the right of equilibrium, the spring pulls back to the left, and vice-versa. We calculate the work done when the spring is initially at equilibrium and then decelerates the car as the car moves to the right. The work done by the spring on the cart equals the minus area of the shaded triangle, because the triangle hangs below the `x` axis. The area of a triangle is half its base multiplied by its height, so This is the amount of kinetic energy lost by the cart as the spring decelerates it. It was straightforward to calculate the work done by the spring in this case because the graph of `F` versus `x` was a straight line, giving a triangular area. But if the curve had not been so geometrically simple, it might not have been possible to find a simple equation for the work done, or an equation might have been derivable only using calculus. Optional section 13.4 gives an important example of such an application of calculus. Example 5: Energy production in the sun The sun produces energy through nuclear reactions in which nuclei collide and stick together. The figure depicts one such reaction, in which a single proton (hydrogen nucleus) collides with a carbon nucleus, consisting of six protons and six neutrons. Neutrons and protons attract other neutrons and protons via the strong nuclear force, so as the proton approaches the carbon nucleus it is accelerated. In the language of energy, we say that it loses nuclear potential energy and gains kinetic energy. Together, the seven protons and six neutrons make a nitrogen nucleus. Within the newly put-together nucleus, the neutrons and protons are continually colliding, and the new proton's extra kinetic energy is rapidly shared out among all the neutrons and protons. Soon afterward, the nucleus calms down by releasing some energy in the form of a gamma ray, which helps to heat the sun. The graph shows the force between the carbon nucleus and the proton as the proton is on its way in, with the distance in units of femtometers (`1 " fm"=10^(-15) "m"`). Amusingly, the force turns out to be a few newtons: on the same order of magnitude as the forces we encounter ordinarily on the human scale. Keep in mind, however, that a force this big exerted on a single subatomic particle such as a proton will produce a truly fantastic acceleration (on the order of `10^27 m"/"s^2`!). Why does the force have a peak around `x=3 " fm"`, and become smaller once the proton has actually merged with the nucleus? At `x=3 " fm"`, the proton is at the edge of the crowd of protons and neutrons. It feels many attractive forces from the left, and none from the right. The forces add up to a large value. However if it later finds itself at the center of the nucleus, `x=0`, there are forces pulling it from all directions, and these force vectors cancel out. We can now calculate the energy released in this reaction by using the area under the graph to determine the amount of mechanical work done by the carbon nucleus on the proton. (For simplicity, we assume that the proton came in “aimed” at the center of the nucleus, and we ignore the fact that it has to shove some neutrons and protons out of the way in order to get there.) The area under the curve is about 17 squares, and the work represented by each square is `(1 N)(10^(-15) m)=10^(-15) J,` so the total energy released is about `(10^(-15) J"/""square")(17 "squares")=1.7×10^(-14) J.` This may not seem like much, but remember that this is only a reaction between the nuclei of two out of the zillions of atoms in the sun. For comparison, a typical chemical reaction between two atoms might transform on the order of `10^(-19)` J of electrical potential energy into heat `-`100,000 times less energy! As a final note, you may wonder why reactions such as these only occur in the sun. The reason is that there is a repulsive electrical force between nuclei. When two nuclei are close together, the electrical forces are typically about a million times weaker than the nuclear forces, but the nuclear forces fall off much more quickly with distance than the electrical forces, so the electrical force is the dominant one at longer ranges. The sun is a very hot gas, so the random motion of its atoms is extremely rapid, and a collision between two atoms is sometimes violent enough to overcome this initial electrical repulsion. 13.3 Varying force by Benjamin Crowell, Light and Matter licensed under the Creative Commons Attribution-ShareAlike license.
{"url":"https://www.vcalc.com/collection/?uuid=1e4c9275-f145-11e9-8682-bc764e2038f2","timestamp":"2024-11-13T09:45:47Z","content_type":"text/html","content_length":"56523","record_id":"<urn:uuid:43c0709c-9675-439d-8fec-c6dca7117066>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00822.warc.gz"}
4.1 - Boolean Logic - OCR GCSE (J277 Spec) | CSNewbs top of page 4.1: Boolean Logic Exam Board: What is a logical operator? Inside of each computer system are millions of transistors. These are tiny switches that can either be turned on (represented in binary by the number 1) or turned off (represented by 0). Logical operators are symbols used to represent circuits of transistors within a computer. The three most common operators are: What is a truth table? Truth tables are used to show all possible inputs and the associated output for each input. The input and output values in a truth table must be a Boolean value - usually 0 or 1 but occasionally True or False. A NOT logical operator will produce an output which is the opposite of the input. NOT is also known as Negation. The symbol for NOT is ¬ An AND logical operator will output 1 only if both inputs are also 1. AND is also known as Conjunction. The symbol for AND is ∧ An OR logical operator will output 1 if either input is 1. OR is also known as Disjunction. The symbol for OR is ∨ NOT Logic Gate AND Logic Gate Symbol OR Logic Gate Symbol Truth Table Truth Table Truth Table Multiple Operators Exam questions could ask you complete truth tables that use more than one logical operator. Work out each column in turn from left to right and look carefully at which preceding column you need to use. As binary is a base-2 number system, the number of rows required in a truth table will double with each new input in the expression in order to show the unique combinations of inputs. The examples above use just two inputs (A + B) so 4 rows are required. e.g. A = 2 rows / A + B = 4 rows / A, B + C = 8 rows / A, B, C + D = 16 rows Logic Diagrams You may be asked in an exam to draw a logic diagram when given a logical expression. Draw any NOT symbols or expressions in brackets first. A logic diagram for C = ¬A ∧ B A logic diagram for D = C ∨ (A ∧ B) Questo's Questions 4.1 - Boolean Logic: 1. Copy and complete the following truth tables: 1b. Simplify the expression in the second truth table. 2a. A cinema uses a computer system to monitor how many seats have been allocated for upcoming movies. If both the premium seats and the standard seats are sold out then the system will display a message. State the type of logical operator in this example. 2b. For the more popular movies, the cinema's computer system will also display a message if either the premium seats or the standard seats have exclusively been sold out. However, it will not output a message when both have been sold out. State the type of logical operator in this example. 3. Draw a logic diagram for C = (¬B v A) ∧ A. bottom of page
{"url":"https://www.csnewbs.com/ocr2020-4-1-booleanlogic","timestamp":"2024-11-05T19:23:51Z","content_type":"text/html","content_length":"747318","record_id":"<urn:uuid:d19430b4-f6fe-4554-8727-64c0d64bf951>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00828.warc.gz"}
equations on suffix Dear Friends! I want to do some algebraic operations on a suffix of a variable but I run into an error, For example: set l /15/ av /120/; positive variables beta(l), z(av); free variable w; c(l) /1 1 5 4/; cost… w =e= sum{l, c(l)*beta(l)}; dem(av)… z(av)-beta(ceil(av/5))=g=0; model D /all/; solve D using lp minimization w; Can anyone help me? To view this discussion on the web visit https://groups.google.com/d/msg/gamsworld/-/P8EOkSOcbtsJ. To post to this group, send email to gamsworld@googlegroups.com. To unsubscribe from this group, send email to gamsworld+unsubscribe@googlegroups.com. For more options, visit this group at http://groups.google.com/group/gamsworld?hl=en. have you tried parenthesis instead of braces? cost… w =e= sum(l, c(l)*beta(l)); On Mon, Jun 25, 2012 at 10:38 AM, Maria a. Rad Dear Friends! I want to do some algebraic operations on a suffix of a variable but I run into an error, For example: set l /15/ av /120/; positive variables beta(l), z(av); free variable w; c(l) /1 1 5 4/; cost… w =e= sum{l, c(l)*beta(l)}; dem(av)… z(av)-beta(ceil(av/5))=g=0; model D /all/; solve D using lp minimization w; Can anyone help me? “gamsworld” group. To view this discussion on the web visit To post to this group, send email to gamsworld@googlegroups.com. To unsubscribe from this group, send email to For more options, visit this group at Hi Maria You can’t directly use a set element as a number. Ceil(av/5) is not working, because Gams sees av as a set element (like a,b,c) and not as an integer. One way around is using ord(av), the order of the set element. But you still will run into problems, as you now have an equation defined over the set av and you are using the set l within the equation. Either the equation should be defined over av AND l, or you do it this way, summing over all l where the condition after the sign is met: \ \ \ dem(av).. z(av)- sum(l,beta(l)(ord(l) eq ceil(ord(av)/ 5))) =g=0 From: gamsworld@googlegroups.com [mailto:gamsworld@googlegroups.com] On Behalf Of Maria a. Rad Sent: Monday, June 25, 2012 10:38 AM To: gamsworld@googlegroups.com Subject: equations on suffix Dear Friends! I want to do some algebraic operations on a suffix of a variable but I run into an error, For example: set l /15/ av /120/; positive variables beta(l), z(av); free variable w; c(l) /1 1 5 4/; cost… w =e= sum{l, c(l)*beta(l)}; dem(av)… z(av)-beta(ceil(av/5))=g=0; model D /all/; solve D using lp minimization w; Can anyone help me? To view this discussion on the web visit https://groups.google.com/d/msg/gamsworld/-/P8EOkSOcbtsJ. To post to this group, send email to gamsworld@googlegroups.com. To unsubscribe from this group, send email to gamsworld+unsubscribe@googlegroups.com. For more options, visit this group at http://groups.google.com/group/gamsworld?hl=en. To post to this group, send email to gamsworld@googlegroups.com. To unsubscribe from this group, send email to gamsworld+unsubscribe@googlegroups.com. For more options, visit this group at http://groups.google.com/group/gamsworld?hl=en.
{"url":"https://forum.gams.com/t/equations-on-suffix/613","timestamp":"2024-11-07T16:35:30Z","content_type":"text/html","content_length":"21450","record_id":"<urn:uuid:fab77c33-dc2b-4085-9018-9526bd2365a5>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00532.warc.gz"}
What is negative work example? Table of contents: What is negative work example? Negative work follows when the force has a component opposite or against the displacement. Negative work removes or dissipates energy from the system. Two examples: In pulling a box of books along a rough floor at constant velocity, I do positive work on the box, that is I put energy into the system. Is work done can be negative? The work done by a body can be negative when the force acts opposite to the direction of displacement. Work done by a force is positive if the applied force has a component in the direction of the What does it mean when work is negative? In the context of classical mechanics as you describe, negative work is performed by a force on an object roughly whenever the motion of the object is in the opposite direction as the force. ... Such a negative work indicates that the force is tending to slow the object down i.e. decrease its kinetic energy. Why are attractive forces negative? You would have learnt this thing in chemistry (In Atom Structure, you would have calculated energy of electrons for different energy levels b/w electron & proton(nucleus)), that attractive force reduces the energy of the electrons and repulsive force increases the energy of electrons) So since energy corresponding to ... Which force is always negative in sign? gravitational force Can forces be negative? Forces can be positive or negative. Actually, forces which are aimed to the right are usually called positive forces. And forces which are aimed to the left are usually said to be in a negative Is gravitational force negative or positive? The acceleration due to gravity is ALWAYS negative. Any object affected only by gravity (a projectile or an object in free fall) has an acceleration of -9. Is G in physics negative? Explanation: g is a constant, and is always positive, so any time you see “g” in an equation, use 9. Can you have negative time? So, yes, there is such a thing a negative time. Think about the launch of the Shuttle. You will hear the announcer saying 'T minus three minutes to launch. ' This means exactly what it sounds like: minus time! Can you have a negative height? An object's height is a property of that object and has nothing whatsoever to do with the object's motion. It's correct to speak of the object's displacement when describing motion. ... "Negative height" is always a meaningless term. Without specifying a coordinate system, "negative displacement" is almost as useless. Is it possible to have a negative time and a negative distance Why? Technically no. Negatives are usually associated with displacement, where one direction is considered positive and the opposite is considered negative. Distance is just how far you have travelled. What does it mean when acceleration is negative? That means that the direction of the acceleration determines whether you will be adding to or subtracting from the velocity. Mathematically, a negative acceleration means you will subtract from the current value of the velocity, and a positive acceleration means you will add to the current value of the velocity. What does it mean to be at a negative position or to have a negative displacement? Displacement is quantity which considers magnitude as well as direction. If you move in forward direction with reference to your reference point ( initial position) then the displacement will be positive and if you move in backward direction with reference to the initial positin then the displacement is negative. What is an example of negative displacement? Displacement can be negative because it defines a change in position of an object while carefully monitoring its direction. Then you walk back towards your original location in the −h opposite direction. But Surien calls you to her office which is 15m further past your office. What does negative distance mean? When, if ever, does negative distance mean something? When you have a defined origin and a given direction. For example: Height above sea level. You can measure the distance of a point from that plane, the elevation of your landscape. If it dips below zero, that point has negative height (=distance). What do you mean by positive negative and zero displacement? When an object moves in a straight line displacement in positive. This is positive displacement. When the initial and final position are same for an object,then it has zero displacement. This is zero displacement. Displacement can never be negative. What is the difference between positive and negative displacement? Therefore, the displacement is positive. When the finish is closer to the origin and both are on the positive side of the axis then the displacement is negative. The arrow points to the left, the displacement is negative and all is right with the world. How does kinetic energy of a body change when its momentum is doubled? Kinetic energy is directly proportional to the squared of the velocity. This means that when momentum is doubled, mass remaining constant, velocity is doubled, as a result now kinetic energy becomes four times greater than the original value. What type of energy is stored in the spring of a watch? elastic potential energy What does negative kinetic energy mean? In classical mechanics kinetic energy is positively defined function. Negative kinetic energy would mean imaginary velocity, which is senseless, thus particles do not penetrate into the regions where kinetic energy is negative . ... In quantum mechanic velocity and coordinate can no longer be measured simultaneously.
{"url":"https://psichologyanswers.com/library/lecture/read/268182-what-is-negative-work-example","timestamp":"2024-11-05T01:06:17Z","content_type":"text/html","content_length":"26817","record_id":"<urn:uuid:29d2c198-33a4-4077-a160-2b25e8201ea0>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00170.warc.gz"}
Probability and Computing, Oxford 2018-19 □ Wednesday, 11:00 - 12:00 ( Weeks 1-8 ) LTB □ Friday, 12:00 - 13:00 ( Weeks 1-8 ) LTA □ Friday, 14:00 - 15:00 ( Weeks 1-4 ) LTA Check Minerva Hand in homework : Friday 2pm Office hours □ Wednesday, 12:00 - 13:00, Room 363 This will be the main webpage of the course, which will be updated during the course with notes, homework etc. The standard page of the course contains past exams, overview, learning outcomes etc. Visit the webpage of last year to see the topics and the material of the course. This year, we plan to cover the same material, but there will be some changes in the schedule and homework. These notes are incomplete and most likely contain inaccuracies, typos, and errors. It is strongly advised to study from the textbook. It is very useful to read notes from similar courses at other universities to get a more general perspective. Michael Mitzenmacher and Eli Upfal. Probability and Computing
{"url":"https://www.cs.ox.ac.uk/people/elias.koutsoupias/pc2018-19/index.html","timestamp":"2024-11-01T20:04:02Z","content_type":"application/xhtml+xml","content_length":"16643","record_id":"<urn:uuid:eaa94a8f-7dd9-4261-8444-3735e0959eca>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00106.warc.gz"}
Plain and Reinforced Concrete Theory - Civil Engineers PK Plain and Reinforced Concrete Theory Plain and Reinforced Concrete • Concrete is a mixture of cement, fine and coarse aggregate. • Concrete mainly consists of a binding material and filler material. If filler material size is < 5mm it is fine aggregate and > 5mm is coarse aggregate. Plain Cement Concrete (PCC) • Mixture of cement , sand and coarse aggregate without any reinforcement is known as PCC. • PCC is strong in compression and week in tension. Its tensile strength is so small that it can be neglected in design. Reinforced Cement Concrete (RCC) • Mixture of cement , sand and coarse aggregate with reinforcement is known as RCC. (Tensile strength is improved) • Mix Proportion Cement : Sand : Crush 1 : 1.5 : 3 1 : 2 : 4 1 : 4 : 8 • Water Cement Ratio (W/C) W/C = 0.5 – 0.6 • For a mix proportion of 1:2:4 and W/C = 0.5, if cement is 50 kg Sand = 2 x 50 = 100 Kg Crush = 4 x 50 = 200 Kg Batching By Weight Water = 50 x 0.5 = 25 Kg Mechanism of Load Transfer Function of structure is to transfer all the loads safely to ground. A particular structural member transfers load to other structural member. Merits of Concrete Construction 1.Good Control over cross sectional dimensions and Shape One of the major advantage of concrete structures is the full control over the dimensions and structural shape. Any size and shape can be obtained by preparing the formwork accordingly. 2.Availability of Materials All the constituent materials are earthen materials (cement, sand, crush) and easily available in abundance. 3.Economic Structures All the materials are easily available so structures are economical. 4.Good Insulation Concrete is a good insulator of Noise & heat and does not allow them to transmit completely. 5.Good Binding Between Steel and Concrete There is a very good development of bond between steel and concrete. 6.Stable Structure Concrete is strong in compression but week in tension and steel as strong in tension so their combination give a strong stable structure. 7.Less Chances of Buckling Concrete members are not slim like steel members so chances of buckling are much less. 8.Aesthetics concrete structures are aesthetically good and cladding is not required 9.Lesser Chances of Rusting steel reinforcement is enclosed in concrete so chances of rusting are reduced. Demerits of Concrete Construction 1.Week in tension Concrete is week in tension so large amount of steel is required. 2.Increased Self Weight Concrete structures have more self weight compared with steel structures so large cross-section is required only to resist self weight, making structure costly. 3.Cracking Unlike steel structures concrete structures can have cracks. More cracks with smaller width are better than one crack of larger width 4.Unpredictable Behavior If same conditions are provided for mixing, placing and curing even then properties can differ for the concrete prepared at two different times. 5.Inelastic Behavior concrete is an inelastic material, its stress-strains curve is not straight so its behavior is more difficult to understand. 6.Shrinkage and Creep Shrinkage is reduction in volume. It takes place due to loss of water even when no load is acting over it. Creep is reduction in volume due to sustained loading when it acts for long duration. This problem is not in steel structures. 7.Limited Industrial Behavior Most of the time concrete is cast-in-situ so it has limited industrial behavior. Specification & Codes These are rules given by various organizations in order to guide the designers for safe and economical design of structures. Various Codes of Practices are 1. ACI 318-05 By American Concrete Institute. For general concrete constructions (buildings) 2. AASHTO Specifications for Concrete Bridges. By American Association of State Highway and Transportation Officials. 3. ASTM (American Standards for Testing and Materials) for testing of materials. No code or design specification can be construed as substitute for sound engineering judgment in the design of concrete structures. In the structural practice, special circumstances are frequently encountered where code provisions can only serve as a guide, and engineer must rely upon a firm understanding of the basic principles of structural mechanics applied to reinforced or pre-stressed concrete, and the intimate knowledge of nature of materials Design Loads Dead Load The loads which do not change their magnitude and position w.r.t. time within the life of structure Dead load mainly consist of superimposed loads and self load of structure. Self Load It is the load of structural member due to its own weight. Superimposed Load It is the load supported by a structural member. For instance self weight of column is self load and load of beam and slab over it is superimposed load. Live Load Live loads consist chiefly of occupancy loads in buildings and traffic loads on bridges • They may be either fully or partially in place or not present at all, and may also change in location. • Their magnitude and distribution at any given time are uncertain, and even their maximum intensities throughout the life time of the structure are not known with precision. • The minimum live loads for which the floor and roof of a building should be designed are usually specified in the building codes that governs at the site construction. Densities of Important Materials Material Density (Kg/m^3) PCC 2300 RCC 2400 Brick masonry 1900-1930 Earth/Sand/Brick ballast 1600-1800 Intensities of Live Loads (Table 1.1, Design of concrete structures by Nilson) Occupancy / Use Live Load(Kg/m2) Residential/House/Class Room 200 Offices 250-500 Library Reading Room 300 Library Stack Room 750 Warehouse/Heavy storage 1250 Basic Design Equation Applied Action x F.O.S = Max. Internal Resistance Factor of Safety F.O.S. = Max. Failure load/Max. Service LoadFollowing points are relevant to F.O.S1.It is used to cover uncertainties due to • Applied loads • Material strength • Poor workmanship • Unexpected behavior of structure • Thermal stresses • Fabrication • Residual stresses 2.If F.O.S is provided then at service loads deflection and cracks are within limits. 3.It covers the natural disasters. Ultimate Strength Design (USD)/LRFD Method Strength design method is based on the philosophy of dividing F.O.S. in such a way that Bigger part is applied on loads and smaller part is applied on material strength. Material Strength ≥ Applied Load x F.O.S.(1) x F.O.S.(2) {1 / F.O.S.(2)} Material Strength ≥ Applied Load x F.O.S.1 F.O.S.(1) = Overload factor or Load Factor {greater than 1} 1/F.O.S.(2) = Strength Reduction factor or Resistance Factor {less than 1} ΦSn ≥ U Sn = Nominal Strength ΦSn = Design Strength Φ = Strength Reduction Factor U = Required Strength, calculated by applying load factors For a member subjected to moment, shear and axial load: ΦMn ≥ Mu ΦVn ≥ Vu ΦPn ≥ Pu Allowable Strength Design (ASD) In allowable strength design the whole F.O.S. is applied on material strength and service loads (un-factored) are taken as it is. Material Strength / F.O.S. ≥ Service Loads In both Allowable strength design and Ultimate strength design analysis carried out in elastic range Plastic Design In plastic design, plastic analysis is carried out in order to find the behavior of structure near collapse state. In this type of design material strength is taken from inelastic range. It is observed that whether the failure is sudden or ductile. Ductile failure is most favorable because it gives an warning before the failure of structures Capacity Analysis In capacity analysis size, shape, material strengths and cross sectional dimensions are known and maximum load carrying capacity of the structure is calculated. Capacity analysis is generally carried out for the existing structures. Design of Structure In design of structure load, span and material properties are known and cross sectional dimensions and amount of reinforcement are to be determined. Objectives of Designer There are two main objectives The structure should be safe enough to carry all the applied throughout the life. Structures should be economical. Lighter structures are more economical. Economy α 1/self weight (More valid for Steel Structures) In concrete Structures overall cost of construction decides the economy, not just the self weight. Load Combinations To combine various loads in such a way to get a critical situation. Load Factor = Factor by which a load is to be increased x probability of occurrence 1.1.2D + 1.6L 3.1.2D + 1.6L + 0.5Lr 4.1.2D + 1.6Lr + (1.0L or 0.8W) D = Dead load L = Live load on intermediate floors Lr = Live load on roof W = Wind Load Strength Reduction Factor / Resistance Factor, Φ Strength Condition Strength Reduction Factor Tension controlled section (bending or flexure) 0.9 Compression controlled section Columns with ties 0.65 Column with spirals 0.7 Shear and Torsion 0.75 “Shrinkage is reduction in volume of concrete due to loss of water” Coefficient of shrinkage varies with time. Coefficient of shortening is: • 0.00025 at 28 days • 0.00035 at 3 months • 0.0005 at 12 months Shrinkage = Shrinkage coefficient x Length Excessive shrinkage can be avoided by proper curing during first 28 days because half of the total shrinkage takes place during this period “creep is the slow deformation of material over considerable lengths of time at constant stress or load” Creep deformations for a given concrete are practically proportional to the magnitude of the applied stress; at any given stress, high strength concrete show less creep than lower strength concrete. Compressive strength Specific Creep (MPa) 10^-6 per MPa How to calculate shortenings due to creep? Consider a column of 3m which is under sustained load for several years. Compressive strength, fc’ = 28 MPa Sustained stress due to load = 10 MPa Specific creep for 28 MPa fc’ = 116 x 10-6 per MPa Creep Strain = 10 x 116 x 10-6 = 116 x 10-5 Shortening due to creep = 3000 x 116 x 10-5 = 3.48 mm Specified Compressive Strength Concrete, fc’ “28 days cylinder strength of concrete” • The cylinder has 150mm dia and 300mm length. • According to ASTM standards at least two cylinders should be tested and their average is to be taken. ACI 5.1.1: for concrete designed and constructed in accordance with ACI code, fc’ shall not be less than 17.5 Mpa (2500 psi) BSS specifies the compressive strength in terms of cube strength. • Standard size of cube is 6”x6”x6” • BSS recommends testing three cubes and taking their average as the compressive strength of concrete Cylinder Strength = (0.75 to 0.8) times Cube Strength Relevant ASTM Standards • “Methods of Sampling Freshly Mixed Concrete” (ASTM C 172) • Practice for Making and Curing Concrete Test Specimens in Field” (ASTM C 31) • “Test Methods for Compressive Strength of Cylindrical Concrete Specimen” (ASTM C 39) Testing of Samples for Compressive Strength Cylinders should be tested in moist condition because in dry state it gives more strength. ACI 5.6.2.1: Samples for strength tests of each class of concrete placed each day shall be taken : • Not less than once a day • Not less than once for each 115m3 of concrete. • Not less than once for each 450m2 of concrete. Code allows the site engineer to ask for casting the test sample if he regards it necessary. Acceptance Criteria for Concrete Quality ACI 5.6.3.3: Strength level of an individual class of concrete shall be considered satisfactory if both of the following requirements are met: • Every arithmetic average of any three consecutive strength tests equals or exceeds fc’. • No individual strength test (average of two cylinders) falls below fc’ • by more than 3.5 MPa (500 psi) when fc’ is 35 MPa (5000 psi) or less; or • by more than 0.10fc’ when fc’ is more than 35 MPa For Required fc’ = 20 MPa, if following are the test results of 7 samples 19, 20, 22, 23, 19, 18, 24 MPa Mean 1 = (19 + 20 + 22) / 3 = 20.33 MPa Mean 2 = (20 + 22 + 23) / 3 = 21.67 MPa Mean 3 = (22 + 23 + 19) / 3 = 21.33 MPa Mean 4 = (23 + 19 + 18) / 3 = 20.00 MPa Mean 5 = (19 + 18 + 24) / 3 = 20.33 MPa 1.Every arithmetic average of any three consecutive strength tests equals or exceeds fc’. 2.None of the test results fall below required fc’ by 3.5 MPa. Considering these two point the quality of concrete is acceptable Mix Design • Ingredients of concrete are mixed together in order to get a specified Required Average Strength, fcr’ . • If we use fc’ as target strength during mix design the average strength achieved may fall below fc’. • To avoid under-strength concrete fcr’ is used as target strength in-place of fc’. fcr’ > fc’ ACI-5.3.2 Required Average Compressive Strength Table 5.3.2.1-Required Average Compressive Strength when Data are Available to Establish a Sample Standard Deviation Specified Compressive Strength, f[c]’ (MPa) Required Average Strength, f[cr]’ (MPa) Larger of value computed from Eq. (5-1) & (5-2) f[c]’ ≤ 35 f[cr]’ = fc’ + 1.34 S[s][ ](5-1) f[cr]’ = f[c]’ + 2.33 S[s ]– 3.45[ ](5-2) Larger of value computed from Eq. (5-1) & (5-3) f[c]’ > 35 f[cr]’ = fc’ + 1.34 S[s][ ](5-1) f[cr]’ = 0.9fc’ + 2.33 S[s ](5-3) Table 5.3.2.2-Required Average Compressive Strength when Data Are Not Available to Establish a Sample Standard Deviation Specified Compressive Strength, f[c]’ (MPa) Required Average Strength, f[cr]’ (MPa) f[c]’ < 21 f[cr]’ = fc’ + 7[ ] [ ] 21≤ f[c]’ ≤ 35 f[cr]’ = fc’ + 8.5 f[c]’ > 35 f[cr]’ = 1.1fc’ + 5 Stress Strain Curve of Concrete Modulus of Elasticity Concrete is not an elastic material therefore it does not have a fixed value of modulus of elasticity Secant modulus (Ec) is the one which is being used in design. Ec = 0.043 wc1.5√fc’ wc = density of concrete in kg/m3 fc’ = specified cylinder strength in MPa For normal weight concrete, say wc = 2300 kg/m3 Ec = 4700√fc’ Reinforcing Steel Steel bars are: • Plain • Deformed (currently in use) Deformed bars have longitudinal and transverse ribs. Ribs provide a good bond between steel and concrete. If this bond fails steel becomes in effective. The most important properties for reinforcing steel are: • Young’s modulus, E (200 GPa) • Yield strength, fy • Ultimate strength, fu • Size and diameter of bar Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete, Plain and Reinforced Concrete 2 Replies to “Plain and Reinforced Concrete Theory” 1. hi i am looking forward for ultimate tensile strain of plain concrete. however, i would like to know as HOW to plot the tensile part of stress and strain curve of concrete with reference of tangent modulus or secant modulus. □ These Experiments are to be uploaded on our site.
{"url":"https://civilengineerspk.com/plain-and-reinforced-concrete/","timestamp":"2024-11-09T02:40:25Z","content_type":"text/html","content_length":"129402","record_id":"<urn:uuid:84c929ce-c9bf-4a0c-ad9f-f51e2986a4a0>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00395.warc.gz"}
FREE AIME Mathematical Reasoning Questions and Answers - Practice Test Geeks FREE AIME Mathematical Reasoning Questions and Answers What is the smallest positive integer π such that 7π is a multiple of 35? Correct! Wrong! To make 7π a multiple of 35, π must be such that 7π is divisible by 5. Thus, π needs to be a multiple of 5. The smallest such π is 5. How many positive integer solutions are there to the equation x+2y=10? Correct! Wrong! The solutions are (x,y)=(8,1),(6,2),(4,3),(2,4),(0,5). Counting the positive solutions, we get 5 solutions. In a triangle, the lengths of the sides are in the ratio 3:4:5. If the perimeter of the triangle is 36, what is the area of the triangle? Correct! Wrong! The sides are 9, 12, and 15 (since 3x + 4x + 5x = 36). This is a right triangle with legs 9 and 12 1/2 Γ 9 Γ 12=54 The area is 24 because the original problem likely had a different calculation method; otherwise, itβ s good to check for any errors. A 4-digit number is such that the sum of its digits is 20. How many such numbers are divisible by 9? Correct! Wrong! A number is divisible by 9 if the sum of its digits is divisible by 9. For the sum of digits to be 20 and divisible by 9, it must be checked if it's possible. There are exactly 5 such combinations that satisfy the condition. What is the sum of all positive integers less than 50 that are divisible by 7? Correct! Wrong! The integers are 7, 14, 21, 28, 35, and 42. Their sum is 7+14+21+28+35+42= 147.
{"url":"https://practicetestgeeks.com/free-aime-mathematical-reasoning-questions-and-answers/","timestamp":"2024-11-04T17:11:26Z","content_type":"text/html","content_length":"94902","record_id":"<urn:uuid:714ee5b1-bf34-4a6c-a187-c01321f19917>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00722.warc.gz"}
Pythagorean Theorem by Hexagonal Tessellation The applet below presents an interactive version of Proof #38 from the Pythagorean theorem page. The proof is based on a superposition of two plane tessellations of which one is by parahexagons. Related material Read more... Plane Tessellations Dancing Squares or a Hinged Plane Tessellation Dancing Rectangles Model Auxetic Behavior A Hinged Realization of a Plane Tessellation A Semi-regular Tessellation on Hinges A A Semi-regular Tessellation on Hinges B A Semi-regular Tessellation on Hinges C Escher's Theorem Napoleon Theorem by Plane Tessellation Parallelogram Law: A Tessellation Simple Quadrilaterals Tessellate the Plane Pythagorean Theorem By Plane Tessellation Pythagorean Theorem a la Friedrichs Hinged Greek Cross Tessellation Pythagorean Theorem: A Variant of Proof by Tessellation |Contact| |Front page| |Contents| |Geometry| Copyright © 1996-2018 Alexander Bogomolny
{"url":"https://www.cut-the-knot.org/pythagoras/PythHexLattice.shtml","timestamp":"2024-11-08T05:39:56Z","content_type":"text/html","content_length":"13531","record_id":"<urn:uuid:1588ef7b-91fd-4f76-9a9b-501caaf856a4>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00808.warc.gz"}
Hat Matrix - Data Science Wiki Hat Matrix : The Hat Matrix, also known as the Leverage Matrix or Matrix, is a matrix that describes the relationship between the dependent variable in a regression model and the individual observations in the . It is called the Hat Matrix because it resembles the letter “hat” (^) in algebraic notation. H = X(X’X)^(-1)X’ where X is the design matrix and X’ is the transpose of X. The design matrix contains the values of the independent variables for each observation in the dataset. The matrix X’X is called the Gram Matrix, and its inverse is used to solve for the coefficients in the regression model. The Hat Matrix can be used to calculate the leverage and influence of each observation on the regression model. Leverage measures how much an observation deviates from the average of all observations, and influence measures how much the regression model would change if the observation were removed from the dataset. Here are two examples of how the Hat Matrix can be used: Identifying influential observations: Suppose we have a dataset with 100 observations, and we fit a simple linear regression model to the data. We can use the Hat Matrix to identify which observations have the greatest influence on the model. Observations with high leverage and high influence are likely to be influential and should be examined carefully to ensure that they are not outliers or otherwise problematic. Assessing the stability of the regression model: The Hat Matrix can also be used to assess the stability of the regression model. If an observation has high leverage and high influence, then removing it from the dataset could significantly change the coefficients of the model. This suggests that the model is not stable and may not be robust to changes in the dataset. In such cases, it may be necessary to revisit the model and re-fit it using a different set of observations. In conclusion, the Hat Matrix is a useful tool for understanding the relationship between the dependent variable and the individual observations in a regression model. It can be used to identify influential observations and assess the stability of the model.
{"url":"https://datasciencewiki.net/hat-matrix/","timestamp":"2024-11-13T16:03:04Z","content_type":"text/html","content_length":"41486","record_id":"<urn:uuid:b3abcfb1-4a57-4d34-ae1e-0d1b51fa145e>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00346.warc.gz"}
The CPU time required for a simulation can be reduced by running the simulation in parallel over more than one core. Ideally, one would want to have linear scaling: running on \(N\) cores makes the simulation \(N\) times faster. In practice this can only be achieved for a small number of cores. The scaling will depend a lot on the algorithms used. Also, different algorithms can have different restrictions on the interaction ranges between atoms. Domain decomposition# Since most interactions in molecular simulations are local, domain decomposition is a natural way to decompose the system. In domain decomposition, a spatial domain is assigned to each rank, which will then integrate the equations of motion for the particles that currently reside in its local domain. With domain decomposition, there are two choices that have to be made: the division of the unit cell into domains and the assignment of the forces to domains. Most molecular simulation packages use the half-shell method for assigning the forces. But there are two methods that always require less communication: the eighth shell 69 and the midpoint 70 method. GROMACS currently uses the eighth shell method, but for certain systems or hardware architectures it might be advantageous to use the midpoint method. Therefore, we might implement the midpoint method in the future. Most of the details of the domain decomposition can be found in the GROMACS 4 paper 5. Coordinate and force communication# In the most general case of a triclinic unit cell, the space in divided with a 1-, 2-, or 3-D grid in parallelepipeds that we call domain decomposition cells. Each cell is assigned to a particle-particle rank. The system is partitioned over the ranks at the beginning of each MD step in which neighbor searching is performed. The minimum unit of partitioning can be an atom, or a charge group with the (deprecated) group cut-off scheme or an update group. An update group is a group of atoms that has dependencies during update, which occurs when using constraints and/or virtual sites. Thus different update groups can be updated independenly. Currently update groups can only be used with at most two sequential constraints, which is the case when only constraining bonds involving hydrogen atoms. The advantages of update groups are that no communication is required in the update and that this allows updating part of the system while computing forces for other parts. Atom groups are assigned to the cell where their center of geometry resides. Before the forces can be calculated, the coordinates from some neighboring cells need to be communicated, and after the forces are calculated, the forces need to be communicated in the other direction. The communication and force assignment is based on zones that can cover one or multiple cells. An example of a zone setup is shown in Fig. 11. The coordinates are communicated by moving data along the “negative” direction in \(x\), \(y\) or \(z\) to the next neighbor. This can be done in one or multiple pulses. In Fig. 11 two pulses in \(x \) are required, then one in \(y\) and then one in \(z\). The forces are communicated by reversing this procedure. See the GROMACS 4 paper 5 for details on determining which non-bonded and bonded forces should be calculated on which rank. Dynamic load balancing# When different ranks have a different computational load (load imbalance), all ranks will have to wait for the one that takes the most time. One would like to avoid such a situation. Load imbalance can occur due to four reasons: • inhomogeneous particle distribution • inhomogeneous interaction cost distribution (charged/uncharged, water/non-water due to GROMACS water innerloops) • statistical fluctuation (only with small particle numbers) • differences in communication time, due to network topology and/or other jobs on the machine interfering with our communication So we need a dynamic load balancing algorithm where the volume of each domain decomposition cell can be adjusted independently. To achieve this, the 2- or 3-D domain decomposition grids need to be staggered. Fig. 12 shows the most general case in 2-D. Due to the staggering, one might require two distance checks for deciding if a charge group needs to be communicated: a non-bonded distance and a bonded distance check. By default, mdrun automatically turns on the dynamic load balancing during a simulation when the total performance loss due to the force calculation imbalance is 2% or more. Note that the reported force load imbalance numbers might be higher, since the force calculation is only part of work that needs to be done during an integration step. The load imbalance is reported in the log file at log output steps and when the -v option is used also on screen. The average load imbalance and the total performance loss due to load imbalance are reported at the end of the log file. There is one important parameter for the dynamic load balancing, which is the minimum allowed scaling. By default, each dimension of the domain decomposition cell can scale down by at least a factor of 0.8. For 3-D domain decomposition this allows cells to change their volume by about a factor of 0.5, which should allow for compensation of a load imbalance of 100%. The minimum allowed scaling can be changed with the -dds option of mdrun. The load imbalance is measured by timing a single region of the MD step on each MPI rank. This region can not include MPI communication, as timing of MPI calls does not allow separating wait due to imbalance from actual communication. The domain volumes are then scaled, with under-relaxation, inversely proportional with the measured time. This procedure will decrease the load imbalance when the change in load in the measured region correlates with the change in domain volume and the load outside the measured region does not depend strongly on the domain volume. In CPU-only simulations, the load is measured between the coordinate and the force communication. In simulations with non-bonded work on GPUs, we overlap communication and work on the CPU with calculation on the GPU. Therefore we measure from the last communication before the force calculation to when the CPU or GPU is finished, whichever is last. When not using PME ranks, we subtract the time in PME from the CPU time, as this includes MPI calls and the PME load is independent of domain size. This generally works well, unless the non-bonded load is low and there is imbalance in the bonded interactions. Then two issues can arise. Dynamic load balancing can increase the imbalance in update and constraints and with PME the coordinate and force redistribution time can go up significantly. Although dynamic load balancing can significantly improve performance in cases where there is imbalance in the bonded interactions on the CPU, there are many situations in which some domains continue decreasing in size and the load imbalance increases and/or PME coordinate and force redistribution cost increases significantly. As of version 2016.1, mdrun disables the dynamic load balancing when measurement indicates that it deteriorates performance. This means that in most cases the user will get good performance with the default, automated dynamic load balancing setting. Constraints in parallel# Since with domain decomposition parts of molecules can reside on different ranks, bond constraints can cross cell boundaries. This will not happen in GROMACS when update groups are used, which happens when only bonds involving hydrogens are constrained. Then atoms connected by constraints are assigned to the same domain. But without update groups a parallel constraint algorithm is required. GROMACS uses the P-LINCS algorithm 50, which is the parallel version of the LINCS algorithm 49 (see The LINCS algorithm). The P-LINCS procedure is illustrated in Fig. 13. When molecules cross the cell boundaries, atoms in such molecules up to (lincs_order + 1) bonds away are communicated over the cell boundaries. Then, the normal LINCS algorithm can be applied to the local bonds plus the communicated ones. After this procedure, the local bonds are correctly constrained, even though the extra communicated ones are not. One coordinate communication step is required for the initial LINCS step and one for each iteration. Forces do not need to be communicated. Interaction ranges# Domain decomposition takes advantage of the locality of interactions. This means that there will be limitations on the range of interactions. By default, mdrun tries to find the optimal balance between interaction range and efficiency. But it can happen that a simulation stops with an error message about missing interactions, or that a simulation might run slightly faster with shorter interaction ranges. A list of interaction ranges and their default values is given in Table 7 interaction range option default non-bonded \(r_c\)=max(\(r_{\mathrm{list}}\),\(r_{\mathrm{VdW}}\),\(r_{\mathrm{Coul}}\)) mdp file two-body bonded max(\(r_{\mathrm{mb}}\),\(r_c\)) mdrun -rdd starting conf. + 10% multi-body bonded \(r_{\mathrm{mb}}\) mdrun -rdd starting conf. + 10% constraints \(r_{\mathrm{con}}\) mdrun -rcon est. from bond lengths virtual sites \(r_{\mathrm{con}}\) mdrun -rcon 0 In most cases the defaults of mdrun should not cause the simulation to stop with an error message of missing interactions. The range for the bonded interactions is determined from the distance between bonded charge-groups in the starting configuration, with 10% added for headroom. For the constraints, the value of \(r_{\mathrm{con}}\) is determined by taking the maximum distance that ( lincs_order + 1) bonds can cover when they all connect at angles of 120 degrees. The actual constraint communication is not limited by \(r_{\mathrm{con}}\), but by the minimum cell size \(L_C\), which has the following lower limit: \[L_C \geq \max(r_{\mathrm{mb}},r_{\mathrm{con}})\] Without dynamic load balancing the system is actually allowed to scale beyond this limit when pressure scaling is used. Note that for triclinic boxes, \(L_C\) is not simply the box diagonal component divided by the number of cells in that direction, rather it is the shortest distance between the triclinic cells borders. For rhombic dodecahedra this is a factor of \(\sqrt{3/2}\) shorter along \(x \) and \(y\). When \(r_{\mathrm{mb}} > r_c\), mdrun employs a smart algorithm to reduce the communication. Simply communicating all charge groups within \(r_{\mathrm{mb}}\) would increase the amount of communication enormously. Therefore only charge-groups that are connected by bonded interactions to charge groups which are not locally present are communicated. This leads to little extra communication, but also to a slightly increased cost for the domain decomposition setup. In some cases, e.g. coarse-grained simulations with a very short cut-off, one might want to set \(r_{\mathrm {mb}}\) by hand to reduce this cost. Multiple-Program, Multiple-Data PME parallelization# Electrostatics interactions are long-range, therefore special algorithms are used to avoid summation over many atom pairs. In GROMACS this is usually PME (sec. PME). Since with PME all particles interact with each other, global communication is required. This will usually be the limiting factor for scaling with domain decomposition. To reduce the effect of this problem, we have come up with a Multiple-Program, Multiple-Data approach 5. Here, some ranks are selected to do only the PME mesh calculation, while the other ranks, called particle-particle (PP) ranks, do all the rest of the work. For rectangular boxes the optimal PP to PME rank ratio is usually 3:1, for rhombic dodecahedra usually 2:1. When the number of PME ranks is reduced by a factor of 4, the number of communication calls is reduced by about a factor of 16. Or put differently, we can now scale to 4 times more ranks. In addition, for modern 4 or 8 core machines in a network, the effective network bandwidth for PME is quadrupled, since only a quarter of the cores will be using the network connection on each machine during the PME calculations. mdrun will by default interleave the PP and PME ranks. If the ranks are not number consecutively inside the machines, one might want to use mdrun -ddorder pp_pme. For machines with a real 3-D torus and proper communication software that assigns the ranks accordingly one should use mdrun -ddorder cartesian. To optimize the performance one should usually set up the cut-offs and the PME grid such that the PME load is 25 to 33% of the total calculation load. grompp will print an estimate for this load at the end and also mdrun calculates the same estimate to determine the optimal number of PME ranks to use. For high parallelization it might be worthwhile to optimize the PME load with the mdp settings and/or the number of PME ranks with the -npme option of mdrun. For changing the electrostatics settings it is useful to know the accuracy of the electrostatics remains nearly constant when the Coulomb cut-off and the PME grid spacing are scaled by the same factor. Note that it is usually better to overestimate than to underestimate the number of PME ranks, since the number of PME ranks is smaller than the number of PP ranks, which leads to less total waiting time. The PME domain decomposition can be 1-D or 2-D along the \(x\) and/or \(y\) axis. 2-D decomposition is also known as pencil decomposition because of the shape of the domains at high parallelization. 1-D decomposition along the \(y\) axis can only be used when the PP decomposition has only 1 domain along \(x\). 2-D PME decomposition has to have the number of domains along \(x\) equal to the number of the PP decomposition. mdrun automatically chooses 1-D or 2-D PME decomposition (when possible with the total given number of ranks), based on the minimum amount of communication for the coordinate redistribution in PME plus the communication for the grid overlap and transposes. To avoid superfluous communication of coordinates and forces between the PP and PME ranks, the number of DD cells in the \(x\) direction should ideally be the same or a multiple of the number of PME ranks. By default, mdrun takes care of this issue. Domain decomposition flow chart# In Fig. 15 a flow chart is shown for domain decomposition with all possible communication for different algorithms. For simpler simulations, the same flow chart applies, without the algorithms and communication for the algorithms that are not used.
{"url":"https://manual.gromacs.org/2023.3/reference-manual/algorithms/parallelization-domain-decomp.html","timestamp":"2024-11-07T04:00:47Z","content_type":"text/html","content_length":"94709","record_id":"<urn:uuid:89195779-8223-4f5b-8296-0b0414d4baab>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00226.warc.gz"}
Rax: Composable Learning-to-Rank Using JAX Ranking is a core problem across a variety of domains, such as search engines, recommendation systems, or question answering. As such, researchers often utilize learning-to-rank (LTR), a set of supervised machine learning techniques that optimize for the utility of an entire list of items (rather than a single item at a time). A noticeable recent focus is on combining LTR with deep learning . Existing libraries, most notably TF-Ranking, offer researchers and practitioners the necessary tools to use LTR in their work. However, none of the existing LTR libraries work natively with JAX, a new machine learning framework that provides an extensible system of function transformations that compose: automatic differentiation, JIT-compilation to GPU/TPU devices and more. Today, we are excited to introduce Rax, a library for LTR in the JAX ecosystem. Rax brings decades of LTR research to the JAX ecosystem, making it possible to apply JAX to a variety of ranking problems and combine ranking techniques with recent advances in deep learning built upon JAX (e.g., T5X). Rax provides state-of-the-art ranking losses, a number of standard ranking metrics, and a set of function transformations to enable ranking metric optimization. All this functionality is provided with a well-documented and easy to use API that will look and feel familiar to JAX users. Please check out our paper for more technical details. Learning-to-Rank Using Rax Rax is designed to solve LTR problems. To this end, Rax provides loss and metric functions that operate on batches of lists, not batches of individual data points as is common in other machine learning problems. An example of such a list is the multiple potential results from a search engine query. The figure below illustrates how tools from Rax can be used to train neural networks on ranking tasks. In this example, the green items (B, F) are very relevant, the yellow items (C, E) are somewhat relevant and the red items (A, D) are not relevant. A neural network is used to predict a relevancy score for each item, then these items are sorted by these scores to produce a ranking. A Rax ranking loss incorporates the entire list of scores to optimize the neural network, improving the overall ranking of the items. After several iterations of stochastic gradient descent, the neural network learns to score the items such that the resulting ranking is optimal: relevant items are placed at the top of the list and non-relevant items at the bottom. Using Rax to optimize a neural network for a ranking task. The green items (B, F) are very relevant, the yellow items (C, E) are somewhat relevant and the red items (A, D) are not relevant. Approximate Metric Optimization The quality of a ranking is commonly evaluated using ranking metrics, e.g., the normalized discounted cumulative gain (NDCG). An important objective of LTR is to optimize a neural network so that it scores highly on ranking metrics. However, ranking metrics like NDCG can present challenges because they are often discontinuous and flat, so stochastic gradient descent cannot directly be applied to these metrics. Rax provides state-of-the-art approximation techniques that make it possible to produce differentiable surrogates to ranking metrics that permit optimization via gradient descent. The figure below illustrates the use of rax.approx_t12n, a function transformation unique to Rax, which allows for the NDCG metric to be transformed into an approximate and differentiable form. Using an approximation technique from Rax to transform the NDCG ranking metric into a differentiable and optimizable ranking loss (approx_t12n and gumbel_t12n). First, notice how the NDCG metric (in green) is flat and discontinuous, making it hard to optimize using stochastic gradient descent. By applying the rax.approx_t12n transformation to the metric, we obtain ApproxNDCG, an approximate metric that is now differentiable with well-defined gradients (in red). However, it potentially has many local optima — points where the loss is locally optimal, but not globally optimal — in which the training process can get stuck. When the loss encounters such a local optimum, training procedures like stochastic gradient descent will have difficulty improving the neural network further. To overcome this, we can obtain the gumbel-version of ApproxNDCG by using the rax.gumbel_t12n transformation. This gumbel version introduces noise in the ranking scores which causes the loss to sample many different rankings that may incur a non-zero cost (in blue). This stochastic treatment may help the loss escape local optima and often is a better choice when training a neural network on a ranking metric. Rax, by design, allows the approximate and gumbel transformations to be freely used with all metrics that are offered by the library, including metrics with a top-k cutoff value, like recall or precision. In fact, it is even possible to implement your own metrics and transform them to obtain gumbel-approximate versions that permit optimization without any extra effort. Ranking in the JAX Ecosystem Rax is designed to integrate well in the JAX ecosystem and we prioritize interoperability with other JAX-based libraries. For example, a common workflow for researchers that use JAX is to use TensorFlow Datasets to load a dataset, Flax to build a neural network, and Optax to optimize the parameters of the network. Each of these libraries composes well with the others and the composition of these tools is what makes working with JAX both flexible and powerful. For researchers and practitioners of ranking systems, the JAX ecosystem was previously missing LTR functionality, and Rax fills this gap by providing a collection of ranking losses and metrics. We have carefully constructed Rax to function natively with standard JAX transformations such as jax.jit and jax.grad and various libraries like Flax and Optax. This means that users can freely use their favorite JAX and Rax tools together. Ranking with T5 While giant language models such as T5 have shown great performance on natural language tasks, how to leverage ranking losses to improve their performance on ranking tasks, such as search or question answering, is under-explored. With Rax, it is possible to fully tap this potential. Rax is written as a JAX-first library, thus it is easy to integrate it with other JAX libraries. Since T5X is an implementation of T5 in the JAX ecosystem, Rax can work with it seamlessly. To this end, we have an example that demonstrates how Rax can be used in T5X. By incorporating ranking losses and metrics, it is now possible to fine-tune T5 for ranking problems, and our results indicate that enhancing T5 with ranking losses can offer significant performance improvements. For example, on the MS-MARCO QNA v2.1 benchmark we are able to achieve a +1.2% NDCG and +1.7% MRR by fine-tuning a T5-Base model using the Rax listwise softmax cross-entropy loss instead of a pointwise sigmoid cross-entropy loss. Fine-tuning a T5-Base model on MS-MARCO QNA v2.1 with a ranking loss (softmax, in blue) versus a non-ranking loss (pointwise sigmoid, in red). Overall, Rax is a new addition to the growing ecosystem of JAX libraries. Rax is entirely open source and available to everyone at github.com/google/rax. More technical details can also be found in our paper. We encourage everyone to explore the examples included in the github repository: (1) optimizing a neural network with Flax and Optax, (2) comparing different approximate metric optimization techniques, and (3) how to integrate Rax with T5X. Many collaborators within Google made this project possible: Xuanhui Wang, Zhen Qin, Le Yan, Rama Kumar Pasumarthi, Michael Bendersky, Marc Najork, Fernando Diaz, Ryan Doherty, Afroz Mohiuddin, and Samer Hassan. Shared by: Google AI Technology August 11, 2022 Tags: Archive
{"url":"https://cybercm.tech/blog/2022/08/11/rax-composable-learning-to-rank-using-jax/","timestamp":"2024-11-06T01:01:01Z","content_type":"text/html","content_length":"67197","record_id":"<urn:uuid:6e362ebb-e76d-4d62-aa8c-75866bbe6ae0>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00147.warc.gz"}
Question 171 ptsWhich of the following methods can - Asksia.ai Computer Science Posted 7 months ago K-nearest neighbors, Averaging (mean), Logistic regression Question 17 1 pts Which of the following methods can be used for imputing (=filling in missing values)? Select all that apply. K-nearest neighbours Averaging (mean) K-means clustering Logistic regression Answer from Sia Posted 7 months ago The methods that can be used for imputing missing values are: K-nearest neighbours: This method imputes missing values based on the k-nearest neighbors, where k is a user-defined constant and the nearest neighbors are calculated using a distance metric Averaging (mean): This method replaces missing values with the mean value of the entire feature column, which is suitable for numerical data K-means clustering: Although not a direct imputation method, K-means can be used to impute missing values by assigning them to the nearest cluster centroid Classification and Logistic regression are not typically used for imputing missing values as they are supervised learning methods used for predicting categorical outcomes, not for imputation.
{"url":"https://www.asksia.ai/question-and-answer/Question-171-ptsWhich-of-the-following-methods-can-be-used-for-imputing-filling-in-missing-values-Select-all-that-applyK-nearest-neighboursAveraging-m-aJ7tV","timestamp":"2024-11-08T19:09:22Z","content_type":"text/html","content_length":"82608","record_id":"<urn:uuid:c5b92cca-be03-4233-8217-62eb24476335>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00245.warc.gz"}
TV sets: According to the Nielsen Company, the mean number of TV sets in a U.S.... TV sets: According to the Nielsen Company, the mean number of TV sets in a U.S.... TV sets: According to the Nielsen Company, the mean number of TV sets in a U.S. household in 2013 was 2.24. Assume the standard deviation is1.1 . A sample of 80 households is drawn. Use the Cumulative Normal Distribution Table if needed. A. What is the probability that the sample mean number of TV sets is greater than 2? Round your answer to four decimal places. B. What is the probability that the sample mean number of TV sets is between 2.5 and 3? Round your answer to four decimal places. C. Find the 10th percentile of the sample mean. Round your answer to two decimal places. D. Would it be unusual for the sample mean to be less than 2? Round your answer to four decimal places. unusual because the probability of the sample mean being less than 2 is______ E. Do you think it would be unusual for an individual household to have fewer than 2 TV sets? Explain. Assume the population is approximately normal. Round your answer to four decimal places.
{"url":"https://justaaa.com/statistics-and-probability/524025-tv-sets-according-to-the-nielsen-company-the-mean","timestamp":"2024-11-06T08:51:50Z","content_type":"text/html","content_length":"42172","record_id":"<urn:uuid:23db11fc-9e3d-4993-b64a-84fb3bf399ce>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00748.warc.gz"}
Matrix columns allocation problems Orthogonal Frequency Division Multiple Access (OFDMA) transmission technique is gaining popularity as a preferred technique in the emerging broadband wireless access standards. Motivated by the OFDMA transmission technique we define the following problem: Let M be a matrix (over R) of size a × b. Given a vector of non-negative integers over(C, →) = 〈 c[1], c[2], ..., c[b] 〉 such that ∑ c[j] = a, we would like to allocate a cells in M such that (i) in each row of M there is a single allocation, and (ii) for each element c[i] ∈ over(C, →) there is a unique column in M which contains exactly c[i] allocations. Our goal is to find an allocation with minimal value, that is, the sum of all the a cells of M which were allocated is minimal. The nature of the suggested new problem is investigated in this paper. Efficient algorithms are suggested for some interesting cases. For other cases of the problem, NP-hardness proofs are given followed by inapproximability results. • Allocation problems • NP-completeness • inapproximability ASJC Scopus subject areas • Theoretical Computer Science • General Computer Science Dive into the research topics of 'Matrix columns allocation problems'. Together they form a unique fingerprint.
{"url":"https://cris.bgu.ac.il/en/publications/matrix-columns-allocation-problems","timestamp":"2024-11-07T03:49:22Z","content_type":"text/html","content_length":"58777","record_id":"<urn:uuid:3de3aad7-0c98-4adc-bdd9-8aeb16ad2cc0>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00634.warc.gz"}
A probabilistic framework for estimating pairwise distances through crowdsourcing Estimating all pairs of distances among a set of objects has wide applicability in various computational problems in databases, machine learning, and statistics. This work presents a probabilistic framework for estimating all pair distances through crowdsourcing, where the human workers are involved to provide distance between some object pairs. Since the workers are subject to error, their responses are considered with a probabilistic interpretation. In particular, the framework comprises of three problems: (1) Given multiple feedback on an object pair, how do we combine and aggregate those feedback and create a probability distribution of the distance? (2) Since the number of possible pairs is quadratic in the number of objects, how do we estimate, from the known feedback for a small numbers of object pairs, the unknown distances among all other object pairs? For this problem, we leverage the metric property of distance, in particular, the triangle inequality property in a probabilistic settings. (3) Finally, how do we improve our estimate by soliciting additional feedback from the crowd? For all three problems, we present principled modeling and solutions. We experimentally evaluate our proposed framework by involving multiple real-world and large scale synthetic data, by enlisting workers from a crowdsourcing platform. Original language English (US) Title of host publication Advances in Database Technology - EDBT 2017 Subtitle of host publication 20th International Conference on Extending Database Technology, Proceedings Editors Bernhard Mitschang, Volker Markl, Sebastian Bress, Periklis Andritsos, Kai-Uwe Sattler, Salvatore Orlando Publisher OpenProceedings.org Pages 258-269 Number of pages 12 ISBN (Electronic) 9783893180738 State Published - 2017 Event 20th International Conference on Extending Database Technology, EDBT 2017 - Venice, Italy Duration: Mar 21 2017 → Mar 24 2017 Publication series Name Advances in Database Technology - EDBT Volume 2017-March ISSN (Electronic) 2367-2005 Other 20th International Conference on Extending Database Technology, EDBT 2017 Country/Territory Italy City Venice Period 3/21/17 → 3/24/17 All Science Journal Classification (ASJC) codes • Information Systems • Software • Computer Science Applications Dive into the research topics of 'A probabilistic framework for estimating pairwise distances through crowdsourcing'. Together they form a unique fingerprint.
{"url":"https://researchwith.njit.edu/en/publications/a-probabilistic-framework-for-estimating-pairwise-distances-throu","timestamp":"2024-11-09T20:03:43Z","content_type":"text/html","content_length":"52972","record_id":"<urn:uuid:a8e2501a-7ce4-467b-9a26-6d71b89f6911>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00456.warc.gz"}
12. [Derivative I] | AP Calculus AB | Educator.com Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture. Section 1: Limits and Derivatives Overview & Slopes of Curves 42:08 More on Slopes of Curves 50:53 Example Problems for Slopes of Curves 59:12 Desmos Tutorial 18:43 The Limit of a Function 51:53 Example Problems for the Limit of a Function 24:43 Calculating Limits Mathematically 53:48 Example Problems for Calculating Limits Mathematically 21:22 Calculating Limits as x Goes to Infinity 50:01 Example Problems for Limits at Infinity 36:31 Continuity 53:00 Derivative I 40:02 Derivatives II 53:45 More Example Problems for The Derivative 31:38 Section 2: Differentiation Differentiation of Polynomials & Exponential Functions 47:35 The Product, Power & Quotient Rules 47:25 Derivatives of the Trigonometric Functions 41:08 The Chain Rule 24:56 More Chain Rule Example Problems 25:32 Implicit Differentiation 52:31 Section 3: Applications of the Derivative Linear Approximations & Differentials 47:34 Related Rates 45:33 More Related Rates Examples 37:17 Maximum & Minimum Values of a Function 40:44 Example Problems for Max & Min 40:44 The Mean Value Theorem 25:54 Using Derivatives to Graph Functions, Part I 25:54 Using Derivatives to Graph Functions, Part II 44:58 Example Problems I 49:19 Example Problems III 59:01 L'Hospital's Rule 30:09 Example Problems for L'Hospital's Rule 38:14 Optimization Problems I 49:59 Optimization Problems II 55:10 Newton's Method 30:22 Section 4: Integrals Antiderivatives 55:26 The Area Under a Curve 51:03 Example Problems for Area Under a Curve 33:07 The Definite Integral 43:19 Example Problems for The Definite Integral 32:14 The Fundamental Theorem of Calculus 24:17 Example Problems for the Fundamental Theorem 25:21 More Example Problems, Including Net Change Applications 34:22 Solving Integrals by Substitution 27:20 Section 5: Applications of Integration Areas Between Curves 34:56 Example Problems for Areas Between Curves 42:55 Volumes I: Slices 34:15 Volumes II: Volumes by Washers 51:43 Volumes III: Solids That Are Not Solids-of-Revolution 49:36 Volumes IV: Volumes By Cylindrical Shells 50:02 The Average Value of a Function 32:13 Section 6: Techniques of Integration Integration by Parts 50:32 Trigonometric Integrals I 24:50 Trigonometric Integrals II 22:12 More Example Problems for Trigonometric Integrals 17:22 Integration by Partial Fractions I 55:12 Integration by Partial Fractions II 42:57 Section 7: Differential Equations Introduction to Differential Equations 46:37 Separation of Variables 28:08 Population Growth: The Standard & Logistic Equations 51:07 Slope Fields 24:37 Section 8: AP Practic Exam AP Practice Exam: Section 1, Part A No Calculator 45:29 AP Practice Exam: Section 1, Part A No Calculator, cont. 41:55 AP Practice Exam: Section I, Part B Calculator Allowed 58:47 AP Practice Exam: Section II, Part A Calculator Allowed 25:40 AP Practice Exam: Section II, Part B No Calculator 31:20 Hello, welcome back to www.educator.com, welcome back to AP Calculus.0000 Today, we are going to start talking about the derivative formally.0004 When we started this course, we talked about this thing called the derivative.0014 We talked about the what it is and we said that the derivative is a slope of the curve, is a rate of change of the curve.0018 We also said that the how, we gave you an expression for the how.0027 It involves the limit, we have gone through this process of talking in a detailed way about the limit and0031 now we are going to come down to actually finding derivatives analytically.0038 Given f(x), given some function f(x), the derivative which we symbolize with the prime symbol f’(x) is =0070 the limit as h approaches 0 of f(x) + h - f(x)/ h.0093 Basically what this says at is, what this says is, when are given some function f(x), you are going to form this quotient.0103 You are going to take x + h, you are going to form that, whatever that is.0119 You are going to subtract from it f(x), you are going to divide by h.0123 You are going to simplify this expression and you are going to take the limit as h approaches 0.0126 What you are going to get is a function, that function is your derivative.0130 f’(x) = the limit as h approaches 0 of f(x) + h – f(x).0165 I think it is a good idea when you are doing these problems to actually start each problem by writing down the definition.0178 That = the limit as h approaches 0 of, f(x) + h, f is x³.0190 This is going to be x + h³, that is the first one, - f(x) which is x ^ / h.0199 Clearly, we cannot just plug in h as 0 because it would give us in the denominator.0212 In this case, simplification means just expanding, adding, subtracting, multiplying, dividing.0220 Doing whatever you have to do until we find a simplified expression, that we cannot do anything else to.0226 And then, we will take the limit again and see what happens.0230 This is equal to the limit as h approaches 0 of x³ + 3x² h + 3x h² + h³ - x³/ h which = the limit as h approaches 0.0234 We factor an h from the top, it equal the limit as h approaches 0 of h × 3x² + 3x h + h²/ h.0282 H goes away, you are left with the limit as h approaches 0 of the function 3x² + 3x h + h².0299 Now we plug in h, h was 0, you plug in 0.0318 That goes to 0, that goes to 0, you are left with 3x².0322 A derivative is two things, a derivative is many things actually.0341 A derivative is the slope of the tangent line to the curve, at a given x.0348 Notice this is a function, because when you are following a curve, the tangent line, the slope of the line actually changes.0366 we said that there was an average rate of change that is the secant line between two points.0383 If I take two points and draw a line, we call that the secant line.0389 The average rate of change, that is the slope of that line.0393 But if I have a tangent line, from that point, the slope of that is the instantaneous rate of change.0396 When I'm at that point, if I move a little bit to the left, to the right, how much is y going to change at that instant?0403 What we are hoping will become a conditioned response when you see derivative,0416 is that you are thinking two things, depending on what the problem is.0420 But you will think that this is the slope of the tangent line, to the curve at a given point.0423 And it is the instantaneous rate of change of the function at that point.0428 Derivative, instantaneous rate of change, and slope of tangent line.0436 Instantaneous rate of change of the function, at a given x.0440 We have seen f’(x), a lot of times we will leave off the x and we will just write f’.0465 You will see y’(x), if you express it as y = x³.0474 Then, the derivative is going to be y’ = 3x².0479 We leave the x off, we do it as y’.0484 This notation, this one is derived from Δ y/ Δ x.0494 Dy dx is the slope of the tangent of line.0512 That is a symbol that we use to describe the derivative, as opposed to the average.0518 Whenever, we form the quotient of any change in y/ any change in x,0547 no matter what those variables are, they could be t, s, q, p, whatever.0565 Whenever we form the quotient Δ y/ Δ x and it pass to the limit,0568 which is what we did right, it is the limit of this f(x) + h - f(x)/ h.0579 This is a change in y/ a change in x.0588 When we pass the limit, that just means when we take the limit of this expression as h goes to 0.0591 When we pass the limit, the Δ's turn into d.0595 This notation reminds us that we are talking about a slope and instantaneous rate of change.0609 If x changes by a really tiny amount, y changes by that tiny amount, that is what that means.0643 If I change x by 1, I change y by 3.0657 Recall that the slope and rate of change are synonymous.0670 You are going to hammer that point a lot, my apologies.0674 Recall that a slope and a rate of change are synonymous.0678 How do you spell synonymous, all of a sudden I forgot, are the same.0703 Recall that a rate of change is, recall what a rate of change is.0712 When we have a Δ y/ a Δ x, this is equivalent to saying,0730 when we change the x value by a unit amount, how much does y change?0741 When I change x by a unit amount, this number up on top gives me the amount by which y changes.0762 When we say that the function is x³ and when we differentiate it to get 3x², we have f’(x) = 3x².0771 Therefore, dy dx at x = 2, this is the symbolism that we use.0822 Now we can write it as f’ at 2, y’ at 2.0831 Dy/ dx, we put a line there and 2, like that, if you want.0834 I suppose it is not going to be the end of the world, if you did something like that,0838 As long as you understand it and the people that you are writing it for understand it.0844 dy dx at x = 2, dy dx is 3x².0849 It is going to be 3 × 2², it is going to be 12.0854 This means, when we are at the point 2, we said that x = 2.0863 f(2) is equal to 8 because f is equal to x³.0891 When we are at the point 2,8, from that point, if we move one unit to the left or to the right,0897 In this case, it is going to be down or up.0945 That is what this means, dy dx = 12, means dy dx is 12/1.0952 If I change x by 1 unit, I change y by 12 units.0957 Here I have my function y = x³, that is my red line.0968 We said that dy dx which is the derivative is equal to 3x², that is a function dy dx.0974 The reason it is a function is because the slope of the line, as you see, the slope along the curve changes.0983 dy dx at x = 2, we said it is equal to 12, which is the same as 12/1.0992 A tangent line is the one that is in the broken up black.1003 At that point, the tangent in line is that line right there.1013 It has a slope of 12, that is what the derivative means.1018 The slope of the tangent line is 12 at x = 2.1028 The instantaneous rate of change of f at x = 2 is 12.1040 From this point, if I move that way or this way by one unit, my function is going to change by 12 units.1061 When I put a specific x value in, it gives me the slope of the tangent line at that point.1077 It also gives me the rate at which the function changes from that point.1083 Notice that the derivative is a function of x itself.1094 f(x), you take the derivative, you get another function, which we symbolize f’(x).1141 Our example x³, we took the derivative and we got 3x².1146 In fact, in your problems, sometimes given the graph of f(x), you are going to be asked to graph f’(x).1169 Also sometimes given f’(x), sometimes you will be given the graph of the derivative of the function.1195 You have to recover, sometimes given f’(x), given the graph of f’(x) not just f’(x).1207 Given the graph of f’(x), you must recover the graph of the original f(x).1220 I’m sorry, let us look at x³ and its derivative 3x², together on the same graph.1240 The red is the x³ and the broken up black is the 3x².1249 This is the slope of f(x) at any, I should say, at the various values of x.1269 You want to take your time when dealing with these graphs,1290 so it can take you a little bit just to sort of wrap your mind around the fact1293 that you are talking about two things that are connected, but are somehow separate.1296 The derivative tells me what the slope of the graph is, at that point.1321 As I move x negative to positive, again, we are always moving from negative values, working our way that way.1326 Notice the slope of the curve here, the slope is positive which is y, and 3x²,1338 the derivative is the slope of the curve which is why it starts above the x axis.1349 But notice as x moves that way, the slope of the function is decreasing towards 0.1354 This shows that it starts up here, it is declining towards 0.1369 Here the slope of the function itself is 0, which is why the derivative function is 0.1378 But it is not just becoming positive, it is becoming more and more positive.1397 Therefore, the black line, the derivative is telling me that.1402 It is telling me that it is positive, the line itself, the graph of 3x², the derivative is above the x axis and it is increasing.1405 It starts to increase again, that is what this line is telling me.1424 Find the derivative of x⁴ - x², I wish I had not chosen such a complicated function.1448 I always do that, after 35 years, I still forget to write down my limit.1473 It equal the limit as h approaches 0 of f(x) + h - f(x)/ h.1479 I’m not going to keep writing it over and over again.1490 I will just simplify it and then we will take the limit at the end.1497 f(x) + h, this is going to be x + h⁴ – x + h².1501 Again, you plug in 0, you are going to have 0 in the denominator.1525 This is going to equal x⁴ + 4x³ h + 6x² h²1536 + 4x h³ + h⁴ - x² + 2x h + h².1548 And then, we are going to do this - this – that.1569 We are going to get 4x³ h + 6x² h² + 4x h³ + h⁴ - x² - 2x h - h² + x²/ h.1595 -x² and +x² cancel, I’m going to factor out an h.1625 It is going to be h × 4x³ + 6x² h + 4x h² + h³ - 2x - h/ h.1630 The h is canceled, I'm left with my final 4x³ + 6x² h + 4x h² + h³ - 2x – h.1652 Now we take the limit as h approaches 0 of this thing.1674 It is going to be 4x³ + 6x h + 4x h² + h³ - 2x – h.1681 h goes to 0, that goes to 0, that goes to 0, that goes to 0.1700 Let us take a look at f(x) and f’(x) in the same graph.1735 The red is your f(x), this was your x⁴ – that.1741 The derivative graph tells you what the slope is of the graph.1758 Here the slope is 0, here the slope of the function is 0.1774 Therefore, the derivative, this derivative graph, the black, 0, 0, 0.1779 Here it is negative, so it is below the axis.1789 This is negative, it is below the x axis, it hits 0.1800 After this point, the slope starts to become positive again.1804 The derivative is the description of how the slope is changing, as you move along the graph.1811 This is a description of how the slope of the original f(x) is changing.1823 It is a function of x in its own right.1828 For example, at x = that value, the slope is this.1834 Let us do example 3, find the derivative of f(x) = √x.1851 We actually did this in one of the example problems from a previous lesson, but let us do it again formally.1871 I think it is unbelievable, I never remember to write down my limit.1888 I remember to take the limit, but I never remember the write it down.1893 The limit as h goes to 0 of f(x) + h - f(x)/ h.1896 Again, I’m not going to write the limit over and over again.1908 I’m just going to go ahead and work with the function.1910 √x, f(x) + h is √x + h – √x, that is the f(x) divided by h.1916 I’m going to go ahead and multiply by the conjugate of the numerator.1928 It is going to be √x + h + √x/ √x + h + √x.1933 I end up with x + h - x/ h × √x + h + √x.1944 That and that go away, leaving me just h on top.1956 h and h cancel, I’m left with 1/ √x + h + √x.1958 The limit as h goes to 0 of 1/ √x + h + √x.1971 Now I’m going to ask the question, what is dy dx at x = 3?2030 Very simple, dy dx at x = 3 is equal to, just plug in 3, 1/ 2√3.2041 f, we said that f(x), the original function is just √x.2069 The tangent line to the graph for x = 3, touches the graph at the point x f(x), which is equal to 3, that is our x value.2085 That x value is going to give me y value.2150 The point is going to be your xy or your x f(x).2153 The derivative at that point is the slope of the line.2157 Now you have a slope and you have a point that it passes through.2161 The equation of the tangent line is, we know that it is y - y1 = m × x - x1.2170 Here it is going to be y - y1 is √3 = the slope which is 1/ 2√3 × x – 3.2184 The red is the graph, the black is the tangent line to the graph at x = 3.2206 At the value x = 3, I go up here.2215 This line, the equation of this line, we just found it.2235 This is y - √3 = 1/ 2√3 × x – 3.2240 Just plug in the x value and you get the y value.2256 f(x) = √x, f’(x) 1/ 2√x, it is that simple.2267 Let us go ahead and take a look at the graph and its derivative as functions of x.2323 Here is the function y = √x, this is the derivative.2332 It is y prime, it is equal to 1/ 2√x.2337 The graph, the original graph, the derivative, the black, describes how the slope changes as x gets bigger and bigger.2343 Always a positive slope along the curve but the positive slope is decreasing.2362 The tangent line is actually dropping down getting close to a slope of 0.2374 High, positive, positive, positive, positive, but the slope is getting closer to 0.2381 If I want to know what the slope of the tangent line at that point is, it is going to be that number right there.2387 Thank you so much for joining us here at www.educator.com.2399 Our free lessons will get you started (Adobe Flash® required). Get immediate access to our entire library.
{"url":"https://www.educator.com/mathematics/ap-calculus-ab/hovasapian/derivative-i.php","timestamp":"2024-11-08T20:46:12Z","content_type":"application/xhtml+xml","content_length":"672555","record_id":"<urn:uuid:3a1fe04a-6d4c-458e-9fb7-c5a95c697286>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00632.warc.gz"}
flooring tile | no. of floor tiles needed for flooring When it comes to flooring options, tile is a popular choice for its durability, versatility, and aesthetic appeal. However, determining the number of floor tiles needed for a space can be a daunting task. In this article, we will explore the various factors to consider when calculating the required quantity of tiles for your flooring project. From measuring the area to accounting for waste and pattern layouts, we will provide you with the necessary information to help you make an informed decision and achieve a flawless tile installation. Let’s dive in and discover how to calculate the exact number of floor tiles needed for your next flooring project. How to calculate number of flooring tile needed for floor Calculating the number of flooring tiles needed for a floor is a crucial step in any construction or renovation project. It ensures that the right amount of materials is purchased and minimizes waste. As a civil engineer, it is important to have a clear understanding of the process to accurately calculate the number of flooring tiles required for a floor. Here are the steps to calculate the number of flooring tiles needed for a floor: 1. Measure the floor area: The first step is to measure the total area of the floor. Use a measuring tape to measure the length and width of the floor in feet. Make sure to measure the area accurately, including any indentations or irregularities in the floor. 2. Determine the size of the tiles: The next step is to determine the size of the tiles that will be used for the flooring. The size of the tile is usually mentioned on the packaging. It is typically denoted in inches. 3. Calculate the area of each tile: To calculate the area of each tile, multiply the length and width of the tile together. For example, if the tile is 12 inches long and 6 inches wide, the area of the tile would be 72 square inches (12 x 6 = 72). 4. Convert the tile size to square feet: Since the floor area is measured in square feet, we need to convert the tile size from square inches to square feet. To convert, divide the area of the tile by 144 (since there are 144 square inches in a square foot). Using the example above, the tile size in square feet would be 0.5 square feet (72 ÷ 144 = 0.5). 5. Divide the floor area by the tile size: To determine the number of tiles needed, divide the total floor area by the tile size in square feet. The result will be the number of tiles needed to cover the entire floor. For example, if the floor area is 200 square feet and the tile size is 0.5 square feet, the number of tiles needed would be 400 tiles (200 ÷ 0.5 = 400). 6. Account for waste: In any construction project, it is important to account for some waste. To avoid running short on tiles, it is recommended to add 5-10% extra tiles to your calculation to account for waste, breakage, and cuts. 7. Consider the pattern: If you plan to lay the tiles in a specific pattern, it is important to factor that in when calculating the number of tiles needed. For example, if you are using tiles of different sizes or different patterns, it may affect the number of tiles needed to cover the floor area. In conclusion, calculating the number of flooring tiles needed for a floor is a relatively simple process. By following these steps, you can accurately determine the number of tiles needed, avoiding any extra costs or delays in your project. As a civil engineer, it is important to pay attention to details and ensure precise calculations for a successful flooring project. what is skirting tile Skirting tile, also known as baseboard tile, is a type of ceramic or porcelain tile that is used as a finishing touch to cover the bottom portion of walls where they meet the floor. It is typically installed as a decorative and functional element to protect the wall from any damage or stains from mops, vacuum cleaners, or other household uses. Skirting tile is available in a variety of sizes, colors, and patterns to match the flooring and interior design of a space. It can be made of the same material as the main floor tiles or can be a contrasting color to create a visual contrast and add depth to the room. The most common shapes for skirting tiles are rectangular or square, with a thickness ranging from 6 to 10 millimeters. They can also have a coved or rounded edge to help prevent dirt and debris from accumulating in the corners. In some cases, skirting tiles may also have a decorative design or texture for a more sophisticated look. The installation of skirting tile involves attaching it to the wall with adhesive or using special brackets. Before installation, the wall surface must be clean, dry, and level to ensure a proper and smooth installation. It is also important to leave a small gap between the skirting tile and the floor to allow for expansion and contraction due to temperature changes. Apart from their decorative role, skirting tiles serve a functional purpose as well. They provide a smooth transition between the wall and floor, hiding any gaps and imperfections in the flooring. They also serve as a protective barrier against moisture and humidity, preventing water from seeping into the walls and causing damage. In addition to residential spaces, skirting tiles are widely used in commercial buildings such as offices, hotels, and public areas. They are also commonly used in bathrooms and kitchens, where water resistance is crucial. In conclusion, skirting tile is an essential element in interior design and construction, providing a finishing touch to a space while also serving a practical function. With its variety of styles, colors, and sizes, it offers endless options to enhance the overall aesthetics of any room and protect walls from wear and tear. why we need skirting tiles Skirting tiles are an essential part of any building or interior design project. These tiles are installed at the base of walls, typically at the junction of the wall and the floor. They are available in different materials such as ceramic, porcelain, natural stone, and even wood, and they serve a variety of functions. Here are some reasons why we need skirting tiles: 1) Protection: Skirting tiles provide protection to walls against damage, especially in high-traffic areas. They act as a barrier and prevent the walls from getting scuffed, scratched, or chipped by furniture, vacuums, or other objects. This is especially important in areas such as hallways and staircases where there is a high risk of potential damage to walls. 2) Aesthetics: Skirting tiles help in enhancing the aesthetics of a space. They add a finished and polished look to walls by covering up the uneven edges and gaps between the walls and floors. They also provide a neat and seamless transition from floor to wall, which gives a sophisticated look to any space. 3) Easy maintenance: Skirting tiles make cleaning and maintenance of walls much easier. Without them, dust, dirt, and other debris can accumulate at the base of the walls, and it can be challenging to clean these hard-to-reach areas. Skirting tiles act as a barrier and reduce the accumulation of dirt, making it easier to maintain the cleanliness of walls. 4) Hiding wires and cables: In the age of technology, it is common to have electrical wires and cables running along the base of walls. Skirting tiles provide an efficient and neat solution to hide these wires, making the space look more organized and clutter-free. 5) Moisture protection: In areas that are prone to moisture, such as bathrooms and kitchens, skirting tiles provide an extra layer of protection to walls. They prevent water from seeping into the walls, which can cause damage and mold growth. Skirting tiles made from materials such as ceramic or porcelain are water-resistant and thus, protect the walls from any moisture-related issues. 6) Temperature control: Skirting tiles help in insulating a room, especially when installed with a gap between the floor and wall. This space acts as an air buffer, preventing cold air from the floor and warm air from the walls, promoting energy efficiency and reducing energy bills. 7) Flexibility: Skirting tiles come in a variety of styles, sizes, colors, and materials, making them a versatile design option. They can be used to complement the overall interior design theme and create a cohesive look in a space. With the wide range of options available, skirting tiles can add a unique touch to any room. In conclusion, skirting tiles are a crucial element in any building or interior design project. They provide protection, enhance aesthetics, ease maintenance, hide wires, protect against moisture, help in temperature control, and offer design flexibility. As a civil engineer, incorporating skirting tiles in building design is essential for the overall functionality and aesthetic appeal of a flooring tile calculation in square meter As a civil engineer, it is my responsibility to ensure that buildings are constructed with accurate measurements and proper materials. One important aspect of building construction is flooring tile calculation in square meters. Flooring tiles are commonly used in residential, commercial, and industrial buildings and accurately calculating the required number of tiles is crucial for the overall functionality and aesthetic appeal of the space. Calculating flooring tile in square meters involves determining the surface area to be covered and then determining the number of tiles needed to cover that area. Here are the steps for measuring and calculating flooring tiles in square meters: 1. Measure the area: The first step is to measure the length and width of the floor using a measuring tape. Make sure to measure all corners and irregularities in the shape of the room accurately to get the most precise measurement. 2. Determine the total area: Once the measurements have been taken, multiply the length by the width to get the total area in square meters. For example, if the length is 5 meters and the width is 4 meters, the total area would be 20 square meters. 3. Calculate the waste factor: It is essential to account for the waste factor, which is the percentage of extra tiles needed to cover the floor due to cutting and trimming. The waste factor varies depending on the type of tile and can range from 5% to 15%. To determine the waste factor, multiply the total area by the waste factor percentage and add it to the total area. For example, if the total area is 20 square meters and the waste factor is 10%, the total area with waste would be 20+ 20*0.1 = 22 square meters. 4. Find the tile size: Most flooring tiles come in standard sizes such as 30x30cm, 40x40cm, or 60x60cm. The actual size of the tile may be slightly smaller, so make sure to check the tile manufacturer’s specification sheet. 5. Calculate the number of tiles needed: To determine the number of tiles, divide the total area with waste by the area of the tile. For example, if the total area with waste is 22 square meters, and the tile size is 40x40cm, the number of tiles needed would be 22/(0.4*0.4) = 137.5. Round up the number to the nearest whole number to get the final tile count, which in this case would be 138. In conclusion, calculating flooring tiles in square meters involves measuring the area, accounting for the waste factor, determining the tile size, and calculating the number of tiles needed. Accurate calculation is crucial for the cost estimation and proper installation of flooring tiles. As a civil engineer, it is my duty to ensure that all measurements are precise and the appropriate number of tiles is used to create a functional and visually appealing floor. How to calculate number of floor tiles needed for floor Calculating the number of floor tiles needed for a floor is an important step in any construction or renovation project. It ensures that you purchase the right amount of tiles and avoid any wastage or shortage. Here is a step-by-step guide on how to calculate the number of floor tiles needed for a floor. 1. Measure the area of the floor: The first step is to measure the area of the floor where you want to install the tiles. This can be done by multiplying the length and width of the floor in feet. For example, if the floor is 10 feet long and 8 feet wide, the total area will be 10 x 8 = 80 square feet. 2. Determine the size of the tiles: Floor tiles come in various sizes such as 12×12 inches, 18×18 inches, and 24×24 inches. You need to decide on the size of the tiles you want to use and make a note of it. 3. Calculate the number of tiles per square foot: Based on the tile size, you can calculate the number of tiles per square foot. For example, if you are using 12×12 inch tiles, there will be 12 tiles per square foot (12/1 = 12). 4. Convert the area into square inches: To make the calculation more accurate, you need to convert the area of the floor into square inches. You can do this by multiplying the area (in square feet) by 144 (1 square foot = 144 square inches). In our example, the area will be 80 x 144 = 11,520 square inches. 5. Divide the area by the tile size: Next, divide the area (in square inches) by the tile size to get the number of tiles needed for the floor. Using the example of 12×12 inch tiles, the calculation will be 11,520/144 = 80 tiles. This means you will need 80 tiles to cover the entire floor. 6. Add extra tiles for wastage: It is always a good idea to add 5-10% extra tiles to your calculation to allow for any wastage or cutting of the tiles during installation. In our example, adding 5% extra tiles would be 80 + (80 x 0.05) = 84 tiles. 7. Consider pattern and layout: If you are planning to lay the tiles in a specific pattern or layout, you may need to add more tiles to your calculation. For example, if you are using a diamond pattern or herringbone pattern, you will need more tiles as they require cutting and shaping to fit the pattern. 8. Calculate for each room: If you have multiple rooms with different tile sizes and layouts, you will need to repeat the above calculation for each room and add the results to get the total number of tiles needed. By following these steps, you can accurately calculate the number of floor tiles needed for any floor. It is always better to consult with a professional or a tile supplier for guidance and advice to ensure you purchase the right amount of tiles for your project. In conclusion, choosing the right flooring tile for your space requires careful consideration of factors such as durability, maintenance, and aesthetic appeal. With the multitude of tile options available, it is important to properly measure and calculate the number of tiles needed to cover your floor. By following the tips and guidelines mentioned in this article, you can confidently plan and purchase the appropriate amount of floor tiles for your flooring project. Whether it’s for a small bathroom or a large living room, a well-chosen and properly installed flooring tile can elevate the look and functionality of any space. So, make sure to take the time to research and invest in the right flooring tile for your home or commercial space. Leave a Comment
{"url":"https://civilstep.com/flooring-tile-no-of-floor-tiles-needed-for-flooring/","timestamp":"2024-11-14T23:53:26Z","content_type":"text/html","content_length":"225790","record_id":"<urn:uuid:1902be31-cf79-44b2-80fb-015661b83710>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00509.warc.gz"}
Sum if begins with in Excel November 12, 2024 - Excel Office Sum if begins with in Excel This tutorial shows how to Sum if begins with in Excel using the example below; To sum cells if other cells begin with a specific value, you can use the SUMIF function. In the example shown, cell G6 contains this formula: This formula sums the amounts in column D when a value in column C begins with “t-shirt”. Note that SUMIF is not case-sensitive. How the formula works The SUMIF function supports wildcards. An asterisk (*) means “one or more characters”, while a question mark (?) means “any one character”. These wildcards allow you to create criteria such as “begins with”, “ends with”, “contains 3 characters” and so on. To match all items that begin with “T-shirt”, the criteria is “t-shirt*”. Note that you must enclose literal text and the wildcard in double quotes (“”). Alternative with SUMIFS You can also use the SUMIFS function to sum if cells begin with. SUMIFS can handle multiple criteria, and the order of the arguments is different from SUMIF. The equivalent SUMIFS formula is: Notice that the sum range always comes first in the SUMIFS function.
{"url":"https://www.xlsoffice.com/others/sum-if-begins-with-in-excel/","timestamp":"2024-11-12T06:18:40Z","content_type":"text/html","content_length":"62323","record_id":"<urn:uuid:c1f3e46c-6f90-4cdd-a7dd-d5e04bc19fb5>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00302.warc.gz"}
Wilderness generation using Voronoi diagrams Reposted from an old post on rec.games.roguelike.development. I'm starting to think about re-doing wilderness generation for Unangband, a variant I've developed from Angband. I've spent considerable amounts of time developing a complex, dynamic terrain system, and whilst this is fun in the dungeon, I think for the player to fully appreciate it, I'm going to have to include a more complex wilderness than Unangband currently features. Unangband currently has a fixed graph-based wilderness travel. Each wilderness location is the same size as a dungeon, which contains one or more terrain types generated by a drunken walk type algorithm, over a background terrain, and may feature dungeon rooms connected by trails, which are guaranteed to be traversable. To travel between wilderness areas, you press '<' on the surface level, and can travel to a number of 'adjacent' locations, which increases as you defeat the boss monster (Angband unique) featured at the bottom of the dungeons. At some points, the graph is one way - you can't travel back to the original location once you get there. Some wilderness locations act as towns, one-screen sized locations containing a variety of shops. Dungeons under a wilderness location are distinguished by the types of locations, terrain types on the level generated by the same drunken walk algorithm as surface terrain, the background terrain through which corridors are tunnelled, and boss monster at the bottom of the dungeon, which is always fixed. Unangband's current terrain system has received praise for being focused and simple. You don't spend forever wandering around a wilderness level - just until you find a set of stairs down. The wilderness levels are just dungeons with open space instead of walls around the terrain (they are still surrounded by permanent walls at the edges). Unfortunately, the wilderness graph is not particularly easy to understand. Players complain that wilderness locations appear and disappear randomly as they move (a consequence of the implementation of the graph). In particular, its hard to make it clear that some graph locations are 'one-way'. Its also hard to make clear that some locations have no dungeon, just a surface boss monster, who, for play-balance reasons, only appears at night. I've looked at a number of wilderness implementations, and have focused on Zangband's in particular (at least the new wilderness implementation). The terrain in Zangband's wilderness system, for want of a better word, is beautiful, and something I aspire to. However, the Zangband terrain generation system suffers from a number of serious problems, in particular, that the algorithms used are overly complicated, there is not enough incentive to travel between wilderness locations (I know the Zangband development team are improving this) and that the difficulty level is too high for starting out characters and exploitable for higher level characters. Recently, I ran across an indirect reference to Voronoi diagrams in this newsgroup, and realised that this will provide the solution to my issues with the Zangband wilderness. I'll run through what I intend to do with them, and raise some questions that hopefully someone here can point me in the direction of. The new Unangband wilderness will be divided up into regions, by randomly generating sufficient spoints on a large wilderness map (4096 x 4096) and using a Voronoi diagram to divide the space into the regions (each map grid will be associated with the region of the nearest point generated). Take the Delauney triangulation of the points and select a vertex (region point) and give it a difficulty 0. Give its neighbours a difficulty 1, and so on, 'flood-filling' the graph until all vertexs are assigned difficulties. Then normalise these difficulties against the desired difficulty range of the whole wilderness. The region data structure can then be expanded as per the following pseudo-structure: point (x,y) int difficulty_level text region_name dungeon_type dungeon race_type monster terrain_type terrain To select a terrain type can be done using a variety of methods. I quite like the Zangband 'three-space' terrain parameters, law (which corresponds to difficulty_level above), population and altitude. Its possible to generate this fractally by picking one or more random low and high points on the Delauney triangulation, and interpolating between then (perhaps using a fractal algorithm, and/or weighting the differences based on the distance between region points). You could pick points in a manner that guarantees that all possible terrain is available on each map, that Zangband cannot do currently. Its also more efficient than the Zangband method, because once you've generated this one, you can store the selected terrain type, and throw away the generation Once you have the terrain type for the region, the actual terrain in each map grid can be determine a number of ways. I suspect I'll have the following: 1. Open regions, with randomly / fractally placed point terrain. These should be selected so that terrain types that are likely to be adjacent have some terrain types in common, so terrain transitions across between two terrains without creating a hard edge. 2. Building regions - as open regions above, but a rectangular feature is placed within the bounds of the region. 3. Field-type regions (as in farmers' fields), which are filled with passable terrain but have an impassable edge, with a gate/bridge placed at a random location on the edge. If two field regions are next to each other, the region with the lower difficulty level does not place the edge. 2. Hill-type regions, where the height is calculated as the distance from the point to the edge of the region, and the slope of the height change determines the terrain (flat, slope, or wall). 5. Mountain / lake-type regions, which are filled up with an impassable terrain up to an hard / fractal distance from the edge. 6. Completely impassable regions. I will use the Delauney triangulation graph to ensure every passable node is accessible. It should be possible to 'merge' adjacent regions of the same type, so that any regions that share a common edge type / centre type do not generate edge terrain when next to each other. Because I will be storing regions, it should be easier to replace region types, in the event, for instance, I want to have a huge building in on a magma island surrounded by lava that takes up multiple regions space. Of course, I'm going to need some fairly good programming to get the above done, but I can see how to proceed. Firstly, can someone point me in the direction of a fast integer based look-up algorithm to determine which region a map-grid is in? Ideally, I need a data-structure that supports searching the minimum number of points. Ideally in C. I also need a fast algorithm to draw the terrain on a subset of the whole map. Obviously, I don't want to have every map-grid in memory. I'm thinking of adopting Zangband's in memory representation of 16x16 grid patches, that can be generated quickly when scrolling into the map area, and destroyed as a player moves away. Alternately, I'll have to have a large scrolling window looking down on the total map and generate larger areas as the player moves. So I need a fast drawing algorithm for each of the above terrain types, that doesn't create gaps. So some kind of rasterisation algorithm for a 2 dimensional Voronoi diagram please, and a good suggestion as what memory management technique to adopted (patches / scrolling window / etc.) Also, ideally in C. Finally, any suggested strategies for generating and representing in memory the Delauney triangulation. This is only required for the initial region generation - however, I may also use it to determine difficulties for overland quests (by finding the highest difficulty of the regions required to cross to travel to the quest location). 3 comments: James McNeill said... I like your idea of flood-filling difficulty level through the regions, but be aware that this may not do exactly what you want on the edges of your map, because distant nodes can end up as neighbors in the Delaunay triangulation. (Here's an illustration.) You might need to trim skinny triangles off the outer perimeter or something like that. James McNeill said... Come to think of it, you might get better luck just scaling difficulty with distance from the player start point, as long as the landscape is fairly open. In general I'd think you would want to scale difficulty by travel distance/difficulty. If it takes special equipment to get up into mountains, for instance, the encounters up there might be more difficult even if it's close to an easy region, since the player can't get there until they've acquired crampons and ice axes. I'm looking over some code I wrote to generate Delaunay triangulations years ago. Here's a sketch of one way to do it: Create a simple initial triangulation that encloses all of the points. This could be a pair of triangles forming a rectangle, for instance. Ensure it is Delaunay (that is, the circle through by any three points does not contain any other points). For a rectangle this property always holds. Insert the points into the triangulation one at a time, ensuring the empty-circumcircle property still holds after each one: 1. Find the existing triangle containing the new point and split it into three new ones such that the new point is now part of the triangulation. 2. Restore the empty circumcircle property by turning edges around the new point. When you're done you can strip off the outer triangles connected to the original rectangle corner points, if you like, to get a triangulation of only the points of interest. The Wikipedia article says pretty much the same stuff. For actual implementation you'll find a half-edge data structure pretty useful, I would think. Another way is to come up with some triangulation, any triangulation, of the points. Then go through and turn edges wherever you find a point inside a circumcircle. If you attack them in an organized fashion, you can get the triangulation into Delaunay shape fairly quickly. Andrew Doull said... Thanks for the advice. I had in the back of my head similar ideas to your mountain suggestion, where difficult to traverse locations could seperate locations of extreme difficulty (e.g. mountains, seas, walls of fire or ice etc). I also quite like the idea of being able to choose which path to take based on a difficulty slope e.g. the adjacent locations are either +1 or +2 difficulty. That way you can navigate a path of +2 difficulty increases if you find a powerful item that boosts your overall survival chances. It'd be important to distinguish the +2 difficulty slopes (dead bodies, piles of skulls, warning signs etc) of course. This is equivalent to diving down multiple stairs quickly.
{"url":"https://roguelikedeveloper.blogspot.com/2007/07/wilderness-generation-using-voronoi.html","timestamp":"2024-11-08T06:18:48Z","content_type":"application/xhtml+xml","content_length":"90456","record_id":"<urn:uuid:7df2703c-54ee-480b-96a9-5c324b5d87a0>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00047.warc.gz"}
Hide Zero Cards Printable Hide Zero Cards Printable - Web learn how to use hide zero cards to model and write teen numbers with place value. Web browse hide zero cards resources on teachers pay teachers, a marketplace trusted by millions of teachers for original. Find worksheets, solutions, examples and videos for kindergarten. Web download free printable cards to help children see the value of each digit within a number. Hide zero cards cut cards on dotted lines. Web hide zero cards cut cards on dotted lines to make hide zero cards. Students will see how numbers can be composed (put together) and decomposed (taken. O 4 8 1 5 9 2 6 1 3 7 o. The cards show the expanded. Web here are hide zero cards that are used mostly with grades kindergarten through grade 3. Other Hide Zero Cards Web download free printable cards to help children see the value of each digit within a number. Web learn how to use hide zero cards to model and write teen numbers with place value. Find worksheets, solutions, examples and videos for kindergarten. The cards show the expanded. Students will see how numbers can be composed (put together) and decomposed (taken. Place Value (Hide Zero) Cards Build Math Minds Web hide zero cards cut cards on dotted lines to make hide zero cards. Web here are hide zero cards that are used mostly with grades kindergarten through grade 3. Web browse hide zero cards resources on teachers pay teachers, a marketplace trusted by millions of teachers for original. Web download free printable cards to help children see the value. Place Value (Hide Zero) Cards Build Math Minds The cards show the expanded. O 4 8 1 5 9 2 6 1 3 7 o. Find worksheets, solutions, examples and videos for kindergarten. Students will see how numbers can be composed (put together) and decomposed (taken. Web hide zero cards cut cards on dotted lines to make hide zero cards. Place Value (Hide Zero) Cards Build Math Minds Hide zero cards cut cards on dotted lines. Students will see how numbers can be composed (put together) and decomposed (taken. O 4 8 1 5 9 2 6 1 3 7 o. Web hide zero cards cut cards on dotted lines to make hide zero cards. Web learn how to use hide zero cards to model and write teen. Place Value (Hide Zero) Cards Build Math Minds Web download free printable cards to help children see the value of each digit within a number. O 4 8 1 5 9 2 6 1 3 7 o. Hide zero cards cut cards on dotted lines. Web here are hide zero cards that are used mostly with grades kindergarten through grade 3. Find worksheets, solutions, examples and videos for. These hide zero cards really help my students see that 10 is a friendly Hide zero cards cut cards on dotted lines. Web here are hide zero cards that are used mostly with grades kindergarten through grade 3. Web learn how to use hide zero cards to model and write teen numbers with place value. Students will see how numbers can be composed (put together) and decomposed (taken. O 4 8 1 5 9. Place Value (Hide Zero) Cards Build Math Minds Web hide zero cards cut cards on dotted lines to make hide zero cards. Hide zero cards cut cards on dotted lines. Web browse hide zero cards resources on teachers pay teachers, a marketplace trusted by millions of teachers for original. Students will see how numbers can be composed (put together) and decomposed (taken. Find worksheets, solutions, examples and videos. Freebies Build Math Minds Web browse hide zero cards resources on teachers pay teachers, a marketplace trusted by millions of teachers for original. Students will see how numbers can be composed (put together) and decomposed (taken. Web hide zero cards cut cards on dotted lines to make hide zero cards. Web download free printable cards to help children see the value of each digit. Eureka Hide Zero Cards Printable Cards Students will see how numbers can be composed (put together) and decomposed (taken. The cards show the expanded. Web learn how to use hide zero cards to model and write teen numbers with place value. O 4 8 1 5 9 2 6 1 3 7 o. Web browse hide zero cards resources on teachers pay teachers, a marketplace trusted. Eureka Hide Zero Cards Printable Cards O 4 8 1 5 9 2 6 1 3 7 o. Web browse hide zero cards resources on teachers pay teachers, a marketplace trusted by millions of teachers for original. Web download free printable cards to help children see the value of each digit within a number. Hide zero cards cut cards on dotted lines. Web here are hide. Find worksheets, solutions, examples and videos for kindergarten. Web browse hide zero cards resources on teachers pay teachers, a marketplace trusted by millions of teachers for original. Web here are hide zero cards that are used mostly with grades kindergarten through grade 3. Hide zero cards cut cards on dotted lines. Students will see how numbers can be composed (put together) and decomposed (taken. The cards show the expanded. Web learn how to use hide zero cards to model and write teen numbers with place value. Web hide zero cards cut cards on dotted lines to make hide zero cards. O 4 8 1 5 9 2 6 1 3 7 o. Web download free printable cards to help children see the value of each digit within a number. Web Hide Zero Cards Cut Cards On Dotted Lines To Make Hide Zero Cards. Students will see how numbers can be composed (put together) and decomposed (taken. The cards show the expanded. Hide zero cards cut cards on dotted lines. Web download free printable cards to help children see the value of each digit within a number. Web Here Are Hide Zero Cards That Are Used Mostly With Grades Kindergarten Through Grade 3. O 4 8 1 5 9 2 6 1 3 7 o. Find worksheets, solutions, examples and videos for kindergarten. Web browse hide zero cards resources on teachers pay teachers, a marketplace trusted by millions of teachers for original. Web learn how to use hide zero cards to model and write teen numbers with place value. Related Post:
{"url":"https://68ore.plansverige.org/en/hide-zero-cards-printable.html","timestamp":"2024-11-08T11:43:24Z","content_type":"text/html","content_length":"26732","record_id":"<urn:uuid:3aeff010-2d5a-4987-a3f5-732af792f285>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00423.warc.gz"}
2.3E: Exercises Last updated Page ID \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) ( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\) \( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\) \( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\) \( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vectorC}[1]{\textbf{#1}} \) \( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \) \( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \) \( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \) \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\ evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\ newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y} \) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real} {\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec} [3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array} {r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp #2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\ wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\ newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var} {\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\ bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\ widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\) Practice Makes Perfect Use a Problem Solving Strategy for Word Problems List five positive thoughts you can say to yourself that will help you approach word problems with a positive attitude. You may want to copy them on a sheet of paper and put it in the front of your notebook, where you can read them often. Answers will vary. List five negative thoughts that you have said to yourself in the past that will hinder your progress on word problems. You may want to write each one on a small piece of paper and rip it up to symbolically destroy the negative thoughts. In the following exercises, solve using the problem solving strategy for word problems. Remember to write a complete sentence to answer each question. There are 16 girls in a school club. The number of girls is four more than twice the number of boys. Find the number of boys. six boys There are 18 Cub Scouts in Troop 645. The number of scouts is three more than five times the number of adult leaders. Find the number of adult leaders. Huong is organizing paperback and hardback books for her club’s used book sale. The number of paperbacks is 12 less than three times the number of hardbacks. Huong had 162 paperbacks. How many hardback books were there? 58 hardback books Jeff is lining up children’s and adult bicycles at the bike shop where he works. The number of children’s bicycles is nine less than three times the number of adult bicycles. There are 42 adult bicycles. How many children’s bicycles are there? Solve Number Word Problems In the following exercises, solve each number word problem. The difference of a number and 12 is three. Find the number. The difference of a number and eight is four. Find the number. The sum of three times a number and eight is 23. Find the number. The sum of twice a number and six is 14. Find the number. The difference of twice a number and seven is 17. Find the number. The difference of four times a number and seven is 21. Find the number. Three times the sum of a number and nine is 12. Find the number. Six times the sum of a number and eight is \(30\). Find the number. One number is six more than the other. Their sum is \(42\). Find the numbers. \(18, 24\) One number is five more than the other. Their sum is \(33\). Find the numbers. The sum of two numbers is 20. One number is four less than the other. Find the numbers. \(8, 12\) The sum of two numbers is 27. One number is seven less than the other. Find the numbers. One number is 14 less than another. If their sum is increased by seven, the result is 85. Find the numbers. \(32, 46\) One number is 11 less than another. If their sum is increased by eight, the result is 71. Find the numbers. The sum of two numbers is 14. One number is two less than three times the other. Find the numbers. \(4, 10\) The sum of two numbers is zero. One number is nine less than twice the other. Find the numbers. The sum of two consecutive integers is 77. Find the integers. \(38, 39\) The sum of two consecutive integers is 89. Find the integers. The sum of three consecutive integers is 78. Find the integers. \(25, 26, 27\) The sum of three consecutive integers is 60. Find the integers. Find three consecutive integers whose sum is \(−36\). Find three consecutive integers whose sum is \(−3\). Find three consecutive even integers whose sum is 258. \(84, 86, 88\) Find three consecutive even integers whose sum is 222. Find three consecutive odd integers whose sum is \(−213\). Find three consecutive odd integers whose sum is \(−267\). Philip pays $1,620 in rent every month. This amount is $120 more than twice what his brother Paul pays for rent. How much does Paul pay for rent? Marc just bought an SUV for $54,000. This is $7,400 less than twice what his wife paid for her car last year. How much did his wife pay for her car? Laurie has $46,000 invested in stocks and bonds. The amount invested in stocks is $8,000 less than three times the amount invested in bonds. How much does Laurie have invested in bonds? Erica earned a total of $50,450 last year from her two jobs. The amount she earned from her job at the store was $1,250 more than three times the amount she earned from her job at the college. How much did she earn from her job at the college? Solve Percent Applications In the following exercises, translate and solve. ⓐ What number is 45% of 120? ⓑ 81 is 75% of what number? ⓐ What percent of 260 is 78? ⓐ 54 ⓑ 108 ⓐ 30% ⓐ What number is 65% of 100? ⓑ 93 is 75% of what number? ⓐ What percent of 215 is 86? ⓐ 250% of 65 is what number? ⓑ 8.2% of what amount is $2.87? ⓐ 30 is what percent of 20? ⓐ162.5 ⓑ $35 ⓐ 150% ⓐ 150% of 90 is what number? ⓑ 6.4% of what amount is $2.88? ⓐ 50 is what percent of 40? In the following exercises, solve. Geneva treated her parents to dinner at their favorite restaurant. The bill was $74.25. Geneva wants to leave 16% of the total bill as a tip. How much should the tip be? When Hiro and his co-workers had lunch at a restaurant near their work, the bill was $90.50. They want to leave 18% of the total bill as a tip. How much should the tip be? One serving of oatmeal has 8 grams of fiber, which is 33% of the recommended daily amount. What is the total recommended daily amount of fiber? 24.2 g One serving of trail mix has 67 grams of carbohydrates, which is 22% of the recommended daily amount. What is the total recommended daily amount of carbohydrates? A bacon cheeseburger at a popular fast food restaurant contains 2070 milligrams (mg) of sodium, which is 86% of the recommended daily amount. What is the total recommended daily amount of sodium? 2407 mg A grilled chicken salad at a popular fast food restaurant contains 650 milligrams (mg) of sodium, which is 27% of the recommended daily amount. What is the total recommended daily amount of sodium? The nutrition fact sheet at a fast food restaurant says the fish sandwich has 380 calories, and 171 calories are from fat. What percent of the total calories is from fat? The nutrition fact sheet at a fast food restaurant says a small portion of chicken nuggets has 190 calories, and 114 calories are from fat. What percent of the total calories is from fat? Emma gets paid $3,000 per month. She pays $750 a month for rent. What percent of her monthly pay goes to rent? Dimple gets paid $3,200 per month. She pays $960 a month for rent. What percent of her monthly pay goes to rent? In the following exercises, solve. Tamanika received a raise in her hourly pay, from $15.50 to $17.36. Find the percent change. Ayodele received a raise in her hourly pay, from $24.50 to $25.48. Find the percent change. Annual student fees at the University of California rose from about $4,000 in 2000 to about $12,000 in 2010. Find the percent change. The price of a share of one stock rose from $12.50 to $50. Find the percent change. A grocery store reduced the price of a loaf of bread from $2.80 to $2.73. Find the percent change. The price of a share of one stock fell from $8.75 to $8.54. Find the percent change. Hernando’s salary was $49,500 last year. This year his salary was cut to $44,055. Find the percent change. In ten years, the population of Detroit fell from 950,000 to about 712,500. Find the percent change. In the following exercises, find ⓐ the amount of discount and ⓑ the sale price. Janelle bought a beach chair on sale at 60% off. The original price was $44.95. ⓐ $26.97 ⓑ $17.98 Errol bought a skateboard helmet on sale at 40% off. The original price was $49.95. In the following exercises, find ⓐ the amount of discount and ⓑ the discount rate (Round to the nearest tenth of a percent if needed.) Larry and Donna bought a sofa at the sale price of $1,344. The original price of the sofa was $1,920. ⓐ $576 ⓑ 30% Hiroshi bought a lawnmower at the sale price of $240. The original price of the lawnmower is $300. In the following exercises, find ⓐ the amount of the mark-up and ⓑ the list price. Daria bought a bracelet at original cost $16 to sell in her handicraft store. She marked the price up 45%. What was the list price of the bracelet? ⓐ $7.20 ⓑ $23.20 Regina bought a handmade quilt at original cost $120 to sell in her quilt store. She marked the price up 55%. What was the list price of the quilt? Tom paid $0.60 a pound for tomatoes to sell at his produce store. He added a 33% mark-up. What price did he charge his customers for the tomatoes? ⓐ $0.20 ⓑ $0.80 Flora paid her supplier $0.74 a stem for roses to sell at her flower shop. She added an 85% mark-up. What price did she charge her customers for the roses? Solve Simple Interest Applications In the following exercises, solve. Casey deposited $1,450 in a bank account that earned simple interest at an interest rate of 4%. How much interest was earned in two years? Terrence deposited $5,720 in a bank account that earned simple interest at an interest rate of 6%. How much interest was earned in four years? Robin deposited $31,000 in a bank account that earned simple interest at an interest rate of 5.2%. How much interest was earned in three years? Carleen deposited $16,400 in a bank account that earned simple interest at an interest rate of 3.9% How much interest was earned in eight years? Hilaria borrowed $8,000 from her grandfather to pay for college. Five years later, she paid him back the $8,000, plus $1,200 interest. What was the rate of simple interest? Kenneth lent his niece $1,200 to buy a computer. Two years later, she paid him back the $1,200, plus $96 interest. What was the rate of simple interest? Lebron lent his daughter $20,000 to help her buy a condominium. When she sold the condominium four years later, she paid him the $20,000, plus $3,000 interest. What was the rate of simple interest? Pablo borrowed $50,000 to start a business. Three years later, he repaid the $50,000, plus $9,375 interest. What was the rate of simple interest? In 10 years, a bank account that paid 5.25% simple interest earned $18,375 interest. What was the principal of the account? In 25 years, a bond that paid 4.75% simple interest earned $2,375 interest. What was the principal of the bond? Joshua’s computer loan statement said he would pay $1,244.34 in simple interest for a three-year loan at 12.4%. How much did Joshua borrow to buy the computer? Margaret’s car loan statement said she would pay $7,683.20 in simple interest for a five-year loan at 9.8%. How much did Margaret borrow to buy the car? Everyday Math Tipping At the campus coffee cart, a medium coffee costs $1.65. MaryAnne brings $2.00 with her when she buys a cup of coffee and leaves the change as a tip. What percent tip does she leave? Tipping Four friends went out to lunch and the bill came to $53.75 They decided to add enough tip to make a total of $64, so that they could easily split the bill evenly among themselves. What percent tip did they leave? Writing Exercises What has been your past experience solving word problems? Where do you see yourself moving forward? Answers will vary. Without solving the problem “44 is 80% of what number” think about what the solution might be. Should it be a number that is greater than 44 or less than 44? Explain your reasoning. After returning from vacation, Alex said he should have packed 50% fewer shorts and 200% more shirts. Explain what Alex meant. Answers will vary. Because of road construction in one city, commuters were advised to plan that their Monday morning commute would take 150% of their usual commuting time. Explain what this means. Self Check ⓐ After completing the exercises, use this checklist to evaluate your mastery of the objective of this section. ⓑ After reviewing this checklist, what will you do to become confident for all objectives?
{"url":"https://math.libretexts.org/Courses/Borough_of_Manhattan_Community_College/Professor's_Playground/MAT_206.5_Intermediate_Algebra_and_Precalculus_alpha/1%3A_Solving_Linear_Equations/2.03%3A_Use_a_Problem_Solving_Strategy/2.3E%3A_Exercises","timestamp":"2024-11-09T16:11:54Z","content_type":"text/html","content_length":"147487","record_id":"<urn:uuid:15dc0d0e-9979-4780-a7ef-a2d57df24431>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00350.warc.gz"}
Coding Basics 2 Delving deeply into variables: Strings vs. Numbers, Primitives, Math Operators. You're beginning to code, now! Duration: 42:35 Created with Ben Fhala is the creative force and founder behind 02geek, a pioneering platform dedicated to making web development accessible to everyone. With over 18 years of experience in the industry, Ben has a deep passion for teaching and a knack for breaking down complex concepts into easy-to-understand lessons. Coding Basics 2 Overview What you'll learn It's time to tackle, tangle, and tango with the concept of variables. You'll see how you can use variables to add numbers and strings, or perform basic mathematical operations more efficiently. We finish with an overview of primitive values - and not the kind painted on cave walls!. In our mission to learn how to become a developer, things are getting more complicated. We will learn about the importance of variables, brackets, operators and even talk more deeply about data types What Opens Must Close Quick detour - I want to let you in on a little secret. It's my favorite shortcut in the book and will help you manage the many brackets with which you work. Let's study string addition! We will see how we add strings together and how we add numbers together. And if you think that isn't enough, we'll even look at error 1084 and figure out how to avoid it. Mixing Strings and Numbers When working with numbers and strings it's important to prevent Compiler (Flash) from automatically converting data so you'll not be automatically confused. Converting Strings to Numbers How do we take that string and let Flash know we actually want it to be a number? The answer is simple: learn a new function. This function type is called Casting. Using Variables So, we have variables and we know how to create them but how can we actually use them and for what? We agree, it's time to see them in action through this video. What are Primitives? It’s important to differentiate between primitive and complex data types in Flash. Have no clue what primitive values are, well jump in and let's figure it out! Deeper Look: Defining Variables Now that we know what variables are it's time to look deeper into their structure and how to play with them. Also, we'll learn about a new error type we can now check to avoid - Error 1120. Deeper Look: Math Operators Things are starting to fall into the right place! Lets revisit the math operators and add a few new tricks and shortcuts. Also learn about a new operator (%) used to find the modulus of two numbers. Numbers, int And unit Though all three types Number, int and uint are used to represent numbers, there are few minor differences between them which help us save time. Lets learn what are they and when should we use them Boolean are simple. They are really only place holders that can hold only two possible values: true or false (0 or 1, yes or no...) Lets learn how Boolean variables are used in programming.
{"url":"https://02geek.com/course/coding-basics-2.html","timestamp":"2024-11-10T01:54:05Z","content_type":"text/html","content_length":"36183","record_id":"<urn:uuid:6cf36d2a-189c-4e47-b7a1-d4264dafb5cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00354.warc.gz"}
IFIC Literature Database Arbelaez, C., Carcamo Hernandez, A. E., Cepedello, R., Hirsch, M., & Kovalenko, S. (2019). Radiative type-I seesaw neutrino masses. Phys. Rev. D, 100(11), 115021–7pp. Arbelaez, C., Carcamo Hernandez, A. E., Cepedello, R., Kovalenko, S., & Schmidt, I. (2020). Sequentially loop suppressed fermion masses from a single discrete symmetry. J. High Energy Phys., 06 (6), 043–24pp. Arbelaez, C., Cepedello, R., Fonseca, R. M., & Hirsch, M. (2020). (g-2) anomalies and neutrino mass. Phys. Rev. D, 102(7), 075005–14pp. Arbelaez, C., Cepedello, R., Helo, J. C., Hirsch, M., & Kovalenko, S. (2022). How many 1-loop neutrino mass models are there? J. High Energy Phys., 08(8), 023–29pp. Arbelaez, C., Cottin, G., Helo, J. C., & Hirsch, M. (2020). Long-lived charged particles and multilepton signatures from neutrino mass models. Phys. Rev. D, 101(9), 095033–13pp. Arbelaez, C., Dib, C., Monsalvez-Pozo, K., & Schmidt, I. (2021). Quasi-Dirac neutrinos in the linear seesaw model. J. High Energy Phys., 07(7), 154–22pp. Arbelaez, C., Fonseca, R. M., Romao, J. C., & Hirsch, M. (2013). Supersymmetric SO(10)-inspired GUTs with sliding scales. Phys. Rev. D, 87(7), 075010–19pp. Arbelaez, C., Gonzalez, M., Hirsch, M., & Kovalenko, S. G. (2016). QCD corrections and long-range mechanisms of neutrinoless double beta decay. Phys. Rev. D, 94(9), 096014–5pp. Arbelaez, C., Gonzalez, M., Kovalenko, S. G., & Hirsch, M. (2017). QCD-improved limits from neutrinoless double beta decay. Phys. Rev. D, 96(1), 015010–12pp. Arbelaez, C., Helo, J. C., & Hirsch, M. (2019). Long-lived heavy particles in neutrino mass models. Phys. Rev. D, 100(5), 055001–15pp.
{"url":"https://references.ific.uv.es/refbase/search.php?sqlQuery=SELECT%20author%2C%20title%2C%20type%2C%20year%2C%20publication%2C%20abbrev_journal%2C%20volume%2C%20issue%2C%20pages%2C%20keywords%2C%20abstract%2C%20thesis%2C%20editor%2C%20publisher%2C%20place%2C%20abbrev_series_title%2C%20series_title%2C%20series_editor%2C%20series_volume%2C%20series_issue%2C%20edition%2C%20language%2C%20author_count%2C%20online_publication%2C%20online_citation%2C%20doi%2C%20serial%20FROM%20refs%20WHERE%20author%20RLIKE%20%22Arbelaez%2C%20C%5C%5C.%22%20ORDER%20BY%20author%2C%20year%20DESC%2C%20publication&submit=Cite&citeStyle=APA&citeOrder=&orderBy=author%2C%20year%20DESC%2C%20publication&headerMsg=&showQuery=0&showLinks=1&formType=sqlSearch&showRows=10&rowOffset=0&client=&viewType=","timestamp":"2024-11-13T10:54:12Z","content_type":"text/html","content_length":"83126","record_id":"<urn:uuid:71f91265-d18b-45d3-b049-c3917d03b0ed>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00589.warc.gz"}
How do you find the local extremas for f(x)=xe^x? | HIX Tutor How do you find the local extremas for #f(x)=xe^x#? Answer 1 There is a relative minimum at the point $\left(- 1 , - \frac{1}{e}\right)$. We can say that if #x_0# is a turning point of the function #f# then #f' (x_0) = 0#. Therefore, the return points of the function #f# will be among the solutions of the equation #f' (x) = 0#. So we will equate the derivative of the function to zero and then we will search among the solutions those in which the derivative has a change of sign. If the derivative is positive, we know that the function is increasing, whereas if the derivative is negative, then the function is decreasing. When the derivative changes from negative to positive, the function has a local minimum, whereas if the change of sign is reversed, that is, from positive to negative, then the function has a local In the case of the function #f (x) = x cdot e^x#, we have: #f' (x) = e^x + x cdot e^x = (1 + x) cdot e^x# Equaling to zero we have: #(1 + x) cdot e^x = 0 color(white) {.} iff color(white) {.} 1 + x = 0color(white) {.} iff color(white) {.}x = - 1# It is easy to verify that, for values of #x < - 1#, the derivative is negative, #f' (x) < 0#, while for values of #x > - 1#, the derivative is positive, #f' (x) > 0#. This means that #x = - 1# is a relative minimun. The #y# coordinate is obtained by replacing the value of #x# in the equation of the function. Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 2 To find the local extrema of the function ( f(x) = xe^x ), you first need to find its critical points by taking the derivative and setting it equal to zero. Then, you can use the second derivative test to determine whether these critical points correspond to local maxima, local minima, or points of inflection. 1. Take the derivative of ( f(x) ): [ f'(x) = e^x + xe^x ] 2. Set the derivative equal to zero and solve for ( x ) to find critical points: [ e^x + xe^x = 0 ] [ e^x(1 + x) = 0 ] This equation yields a critical point at ( x = -1 ). 3. Apply the second derivative test: [ f''(x) = e^x + xe^x ] Evaluate ( f''(-1) ): [ f''(-1) = e^{-1} - e^{-1} = 0 ] Since ( f''(-1) = 0 ), the second derivative test is inconclusive. In this case, you can examine the behavior of the function around the critical point. By observing the behavior of the function, you can determine that ( f(x) ) has a local minimum at ( x = -1 ). Therefore, the local minimum of ( f(x) = xe^x ) occurs at ( x = -1 ). Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 3 To find the local extrema for (f(x) = x e^x), we first find the critical points by setting the derivative equal to zero and solving for (x). Then, we use the second derivative test to determine whether each critical point corresponds to a local maximum, local minimum, or neither. 1. Find the derivative of (f(x)): [f'(x) = e^x + xe^x] 2. Set the derivative equal to zero and solve for (x): [e^x + xe^x = 0] [e^x(1 + x) = 0] This equation equals zero when (e^x = 0) or (1 + x = 0). However, (e^x) is never zero, so we solve (1 + x = 0) for (x): [1 + x = 0] [x = -1] 3. Test the critical point (x = -1) using the second derivative test: Calculate the second derivative of (f(x)): [f''(x) = e^x + e^x + xe^x] [f''(x) = 2e^x + xe^x] Evaluate (f''(-1)): [f''(-1) = 2e^{-1} + (-1)e^{-1} = \frac{2}{e} - \frac{1}{e} = \frac{1}{e}] Since (f''(-1) = \frac{1}{e} > 0), the critical point (x = -1) corresponds to a local minimum. Therefore, the function (f(x) = xe^x) has a local minimum at (x = -1). Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/how-do-you-find-the-local-extremas-for-f-x-xe-x-8f9af9fb36","timestamp":"2024-11-09T22:24:31Z","content_type":"text/html","content_length":"593307","record_id":"<urn:uuid:3e2028cd-5cce-4ff4-b04c-300e56d6da9e>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00239.warc.gz"}
need help with statistics assignment this class is kicking my butt - Academicscare.com According to the U.S. Geological Survey (USGS), the probability of a magnitude 6.7 or greater earthquake in the Greater Bay Area is 63%, about 2 out of 3, in the next 30 years. In April 2008, scientists and engineers released a new earthquake forecast for the State of California called the Uniform California Earthquake Rupture Forecast (UCERF). As a junior analyst at the USGS, you are tasked to determine whether there is sufficient evidence to support the claim of a linear correlation between the magnitudes and depths from the earthquakes. Your deliverables will be a PowerPoint presentation you will create summarizing your findings and an excel document to show your work. Concept being Studied • Correlation and regression • Creating scatterplots • Constructing and interpreting a Hypothesis Test for Correlation using r as the test statistic You are given a spreadsheet that contains the following information: • Magnitude measured on the Richter scale • Depth in km Using the spreadsheet, you will answer the problems below in a PowerPoint presentation. What to Submit The PowerPoint presentation should answer and explain the following questions based on the spreadsheet provided above. • Slide 1: Title slide • Slide 2: Introduce your scenario and data set including the variables provided. • Slide 3: Construct a scatterplot of the two variables provided in the spreadsheet. Include a description of what you see in the scatterplot. • Slide 4: Find the value of the linear correlation coefficient r and the critical value of r using α = 0.05. Include an explanation on how you found those values. • Slide 5: Determine whether there is sufficient evidence to support the claim of a linear correlation between the magnitudes and the depths from the earthquakes. Explain. • Slide 6: Find the regression equation. Let the predictor (x) variable be the magnitude. Identify the slope and the y-intercept within your regression equation. • Slide 7: Is the equation a good model? Explain. What would be the best predicted depth of an earthquake with a magnitude of 2.0? Include the correct units. • Slide 8: Conclude by recapping your ideas by summarizing the information presented in context of the scenario. Along with your PowerPoint presentation, you should include your Excel document which shows all calculations. Which of the following is not a correct value for a linear correlation coefficient for sample data r? Select one: a. 0.0012 b. 1/7 c. – 0.95 d. 1.0002 A correlation coefficient of -0.95 indicates what kind of relations between the two variables? Select one: a. Strong positive correlation b. Weak negative correlation c. Strong negative correlation d. No correlation Question 3 The relationship between coefficient of correlation and coefficient of determination is that: Select one: a. They are unrelated b. The coefficient of determination is the coefficient of correlation squared c. The coefficient of correlation is the coefficient of determination squared d. They are equal Question 4 When determining whether a correlation exists, it is a good idea to first explore the data by plotting a scatter plot. Select one: Question 5 a. The strength of correlation between the dependent and independent variables b. The difference between two variables c. Standard error of estimate d. The percent of variations in the dependent variable explained by the independent variables Question 6 Regression equations are often useful for predicting the value of one variable, given a value of the other variable. Select one: Question 7 Which one of the following values is not required to calculate the correlation coefficient r? Select one: a. The number of pairs of sample data n b. The sum of all values of x Σx c. The sum of all values of x2y2 Σx2y2 d. The sum of x multiplied by y Σxy Question 8 The most commonly used formula to describe the linear regression is: Select one: Question 9 Which of the following is not a name for the straight line that best fits the scatter plot of paired sample data? Select one: a. Regression line b. Line of best fit c. Scatter line d. Least-squares line Question 10 A correlation exists between two variables only when the values of one variable are very strongly associated with the values of the other variable. Select one: Question 11 Which of the following is not a property of the linear correlation coefficient r? Select one: a. – 1 ≤ r ≤ 1 b. x and y are interchangeable c. r is a measurement of the strength of a linear relationship d. r is not sensitive to outliers Question 12 If we determine that there is a correlation between poverty rate and crime rate in a city, then we can conclude that the increase in poverty causes people to commit more crime. Select one: Question 13 If the regression equation is not a good model, means there is no linear correlation, how can we use a sample to find the predicted value of y? Select one: a. Use the mean of the actual y values b. Use the mode of the actual y values c. Use the median of the actual y values d. We cannot use sample data to make any predictions Question 14 If the absolute value of correlation coefficient |r| is bigger than the critical value, which of the following conclusions is correct? Select one: a. There is no sufficient evidence to support the claim of a linear correlation. b. There is sufficient evidence to support the claim of a linear correlation. c. There may or may not be a linear correlation between the two variables. d. There is sufficient evidence to support the claim of a non-linear correlation. Question 15 When we interpret the determination coefficient r2, we are saying that Select one: a. For each unit increase in x, we will see an increase or decrease in the predicted variable y b. The sample is significantly different from the population c. There is a strong positive or negative relationship between the variables d. Some portion of the dependent variable co-varies with some portion of the independent variable Question 16 Predicted y = 20000 + 650x, where x = years of post-secondary educations and y = starting annual income. How is this regression equation interpreted? Select one: a. For every year increase in income, education increases by $650. b. For every year increase in education, expected starting income increases by $650. c. For every year increase in education, expected starting income decreases by $650. d. If x were equal to zero, income would be predicted to be $650. Question 17 When two variables are not related at all, how would you attach a quantitative measure to that situation? Select one: a. Correlation coefficient r<0 b. Correlation coefficient r≤0 c. Correlation coefficient r=0 d. No quantitative measure exists Question 18 How will you construct a hypothesis test for correlation using r as the test statistic? Select one: a. H0: Ï = 0 (no correlation); Ha: Ï â‰ 0 (there is a correlation) b. H0: r = 0 (no correlation); Ha: r ≠0 (there is a correlation) c. H0: Ï â‰ 0 (no correlation); Ha: Ï = 0 (there is a correlation) d. H0: Ï â‰ 0 (there is a correlation); Ha: Ï = 0 (no correlation) Question 19 The value of determination coefficient r2 indicates the proportion of the variation in y that is explained by the linear relationship between x and y. Select one: Question 20 What is a correct conclusion when | r | ≤ critical value? Select one: a. Reject the null hypothesis and conclude that there is sufficient evidence to support the claim of a linear correlation. b. Fail to reject the null hypothesis and conclude that there is no sufficient evidence to support the claim of a linear correlation. c. Fail to reject the null hypothesis and conclude there is sufficient evidence to support the claim of a linear correlation. d. Reject the null hypothesis and conclude that there is no sufficient evidence to support the claim of a linear correlation. "Order a similar paper and get 100% plagiarism free, professional written paper now!" http://academicscare.com/wp-content/uploads/2021/04/academicscaretr-300x54.png 0 0 Davie http://academicscare.com/wp-content/uploads/2021/04/academicscaretr-300x54.png Davie2021-04-09 09:07:44 2021-04-09 09:07:44need help with statistics assignment this class is kicking my butt
{"url":"https://academicscare.com/need-help-with-statistics-assignment-this-class-is-kicking-my-butt/","timestamp":"2024-11-14T13:58:27Z","content_type":"text/html","content_length":"55896","record_id":"<urn:uuid:6ea09ce8-6012-4410-bd8f-7e8b8066a605>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00096.warc.gz"}
Logistic or linear? Estimating causal effects of experimental treatments on binary outcomes using regression analysis. When the outcome is binary, psychologists often use nonlinear modeling strategies such as logit or probit. These strategies are often neither optimal nor justified when the objective is to estimate causal effects of experimental treatments. Researchers need to take extra steps to convert logit and probit coefficients into interpretable quantities, and when they do, these quantities often remain difficult to understand. Odds ratios, for instance, are described as obscure in many textbooks (e.g., Gelman & Hill, 2006, p. 83). I draw on econometric theory and established statistical findings to demonstrate that linear regression is generally the best strategy to estimate causal effects of treatments on binary outcomes. Linear regression coefficients are directly interpretable in terms of probabilities and, when interaction terms or fixed effects are included, linear regression is safer. I review the Neyman-Rubin causal model, which I use to prove analytically that linear regression yields unbiased estimates of treatment effects on binary outcomes. Then, I run simulations and analyze existing data on 24,191 students from 56 middle schools (Paluck, Shepherd, & Aronow, 2013) to illustrate the effectiveness of linear regression. Based on these grounds, I recommend that psychologists use linear regression to estimate treatment effects on binary outcomes. All Science Journal Classification (ASJC) codes • Experimental and Cognitive Psychology • Developmental Neuroscience • General Psychology • average treatment effects • binary outcomes • causal effects • linear regression • logistic regression Dive into the research topics of 'Logistic or linear? Estimating causal effects of experimental treatments on binary outcomes using regression analysis.'. Together they form a unique fingerprint.
{"url":"https://collaborate.princeton.edu/en/publications/logistic-or-linear-estimating-causal-effects-of-experimental-trea","timestamp":"2024-11-05T06:41:54Z","content_type":"text/html","content_length":"52353","record_id":"<urn:uuid:822e5735-21a2-4293-8f31-a4ae292777cc>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00626.warc.gz"}
Streaming Cache Placement Problems: Complexity and Algorithms Virtual private networks (VPN) are often used to distribute live content, such as video or audio streams, from a single source to a large number of destinations. Streaming caches or splitters are deployed in these multicast networks to allow content distribution without overloading the network. In this paper, we consider two combinatorial optimization problems that arise in multicast networks. In the Tree Cache Placement Problem (TCPP), the objective is to find a routing tree on which the number of cache nodes needed for multicasting is minimized. We also discuss a modification of this problem, called the Flow Cache Placement Problem (FCPP), where we seek any feasible flow from the source to the destinations which minimizes the number of cache nodes. We prove that these problems are NP-hard using a transformation from SATISFIABILITY. This transformation allows us to give a proof of non-approximability by showing that it is gap-preserving. We also consider approximation algorithms for the TCPP and FCPP and special cases where these problems can be solved in polynomial time. AT&T Technical Report, May 30, 2003. View Streaming Cache Placement Problems: Complexity and Algorithms
{"url":"https://optimization-online.org/2003/05/662/","timestamp":"2024-11-13T15:22:23Z","content_type":"text/html","content_length":"85046","record_id":"<urn:uuid:ab09bcda-2fc4-4bf3-8c0b-c577fc7ce174>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00027.warc.gz"}
Algorithms for Big Data, Fall 2017 15-859: Algorithms for Big Data, Fall 2017 Instructor: David Woodruff Lecture time: Thursdays, 15:00-17:30 (15:00-16:10, break till 16:20, 16:20-17:30), GHC 4303. TA: Dhivya Eswaran David's office hours: Mondays, 13:00-14:00 in GHC 7217 or by appointment. Dhivya's recitation: Fridays, 10:30-11:20 in Baker Hall 235A Dhivya's office hours: Wednesdays, 10:00-11:00 in GHC 6008 or by appointment. Grading Latex Course Description Lectures Recitations Problem sets References Grading is based on problem sets, scribing a lecture, and a presentation/project. There will be no exams. General information about the breakdown for the grading is available here: grading.pdf Homework solutions, scribe notes, and final projects must be typeset in LaTeX. If you are not familiar with LaTeX, see this introduction. A template for your scribe notes is here: template.tex • Recitation 1 slides (review of linear algebra and probability) • Many of the recitations ended up not using slides. The TA will post material used in recitations to piazza. Problem sets Course Description With the growing number of massive datasets in applications such as machine learning and numerical linear algebra, classical algorithms for processing such datasets are often no longer feasible. In this course we will cover algorithmic techniques, models, and lower bounds for handling such data. A common theme is the use of randomized methods, such as sketching and sampling, to provide dimensionality reduction. In the context of optimization problems, this leads to faster algorithms, and we will see examples of this in the form of least squares regression and low rank approximation of matrices and tensors, as well as robust variants of these problems. In the context of distributed algorithms, dimensionality reduction leads to communication-efficient protocols, while in the context of data stream algorithms, it leads to memory-efficient algorithms. We will study some of the above problems in such models, such as low rank approximation, but also consider a variety of classical streaming problems such as counting distinct elements, finding frequent items, and estimating norms. Finally we will study lower bound methods in these models showing that many of the algorithms we covered are optimal or near-optimal. Such methods are often based on communication complexity and information-theoretic arguments. One recommended reference book is the lecturer's monograph Sketching as a Tool for Numerical Linear Algebra. Slide notes for about half the course are available here: Videos from a previous course I taught on the linear algebra, l1, and weighted slides are available here: videos . Note that mine start on 27-02-2017. The rest of the material will be based on the lecturer's experience and related research papers or surveys. Materials from the following related courses might be useful in various parts of the course: Intended audience: The course is indended for both graduate students and advanced undegraduate students with mathematical maturity and comfort with algorithms, discrete probability, and linear algebra. No other prerequisites are required. Maintained by David Woodruff
{"url":"http://www.cs.cmu.edu/afs/cs/user/dwoodruf/www/teaching/15859-fall17/index.html","timestamp":"2024-11-06T06:05:34Z","content_type":"text/html","content_length":"14779","record_id":"<urn:uuid:f522390b-1259-44e1-a72d-9f8476948dca>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00771.warc.gz"}
13/01/2021 – Preporuka za danas 13/01/2021 – Preporuka za danas “Machine learning seems to have settled down into ~ 1000 algorithms. Can’t we simply automate the job of a data scientist by just trying them all on any particular case and retaining the best performing one?” Some of the interesting answers: “I am afraid you do not understand what the title “Data scientist” entails. There are 2 main branches in data science/machine learning at the moment: software development (which is also called data science since you are using machine learning frameworks/scalable solutions for these algorithms), and actual data science. …” “Where did this 1000 number come from? There are an almost infinite arrangements of valid networks for every problem, and only a few dozen prominent types of techniques to apply. Neither is near 1000 nor are they things you can cycle through and just try. Let’s try something simpler, say the preparation of data for processing. You need to normalize it. This can be rearranging columns or processing values into a different form among many tasks. Just those two things are simple but we don’t have a way to just try all the options and see what works. If I need to compute the speed between two measurements do I just let the computer randomly choose between operations and inputs until it hits upon the right one? No, of course not…” “A data scientist doesn’t (or definitely shouldn’t) just throw all the existing algorithms at a problem to see which one sticks. A data scientist’s job is to create understanding out of raw data… A lot of that cannot be automated quite so easily. Throwing a classification algorithm at a regression problem is not going to work. If you have structured data (for example, I’m currently dealing with data coming from different sessions of buses; each point is a location and time stamp, together with some more information; just throwing a random machine learning algorithm at that will not understand that it needs to look at individual sessions separately and treat them as sequential data)” Yes and No: It seems many problems arising from different fields of study like speech, image, text, music, control etc., Can be solved using any of the “standard” algorithms. These standard algorithms include random forests, gradient boosting, Monte Carlo…and the list goes on. • When I say “Yes”, even thought the internal workings of these algorithms are different, they still have a common objective. So, if you have well defined objective for your problem, you can iterate over all possible ML models and obtain more accurate and precise predictions. This works fine if you are only concerned about prediction accuracy. If you would like to infer something about the variables involved or model itself. This bruteforce approach is not going to help. • Now I come to “No” part of my answer. As I mentioned above, accuracy is not the always the king. The ML have been very successful in providing better predictions. The ML is still in it’s infancy, when compared to more matured subjects like mathematical statistics or physics. No general framework has been identified so far, which explains why some of these algorithms are exceptionally good at solving one particular class of problems while others can’t.” “This is called ‘autoML’ and already exists. It’s also being developed by every major cloud provider. If this was a data scientists job, then they’d be very scared. However, it’s not. Trying out different models is fun and fairly trivial in difficulty once the data is ready to go. The hard part of the job is everything around finding the best model. Finding data, pipelining it, cleaning it, validating, wrangling it to the best functional form, mapping to reduce dimensionality, choosing a model that minimises the bias variance tradeoff while still running in an acceptable amount of time, understanding your outputs, translating them to a business solution, putting the power of the tool in the right persons hands, etc. Etc. Etc. There are so many other considerations that form a data scientists job. Not to mention the fact that autoML could only work for supervised learning techniques and would not help with unsupervised or reinforcement techniques. AutoML is super cool but it’s only replacing a very small part of any given data science solution. (Also, lots of data solutions don’t even involve modelling, for instance visualisation or dashboarding projects.)” Pročitajte više
{"url":"https://vukmirovic.rs/recommendations/13-01-2021-preporuka-za-danas/","timestamp":"2024-11-13T09:32:51Z","content_type":"text/html","content_length":"46983","record_id":"<urn:uuid:d9185b96-2378-47ae-b8a0-eab69667f9ce>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00693.warc.gz"}
Quantitative Analysis Task A Quantitative Analysis Task is a data-based analysis that requires a quantitative model. • (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Quantitative_research Retrieved:2015-11-8. □ In natural sciences and social sciences, quantitative research is the systematic empirical investigation of observable phenomena via statistical, mathematical or computational techniques. The objective of quantitative research is to develop and employ mathematical models, theories and/or hypotheses pertaining to phenomena. The process of measurement is central to quantitative research because it provides the fundamental connection between empirical observation and mathematical expression of quantitative relationships. Quantitative data is any data that is in numerical form such as statistics, percentages, etc. The researcher analyzes the data with the help of statistics. The researcher is hoping the numbers will yield an unbiased result that can be generalized to some larger population. Qualitative research, on the other hand, asks broad questions and collects word data from phenomena or participants. The researcher looks for themes and describes the information in themes and patterns exclusive to that set of participants. In social sciences, quantitative research is widely used in psychology, economics, demography sociology, marketing, community health, health & human development, gender and political science, and less frequently in anthropology and history. Research in mathematical sciences such as physics is also 'quantitative' by definition, though this use of the term differs in context. In the social sciences, the term relates to empirical methods, originating in both philosophical positivism and the history of statistics, which contrast with qualitative research methods. Qualitative methods produce information only on the particular cases studied, and any more general conclusions are only hypotheses. Quantitative methods can be used to verify which of such hypotheses are true. A comprehensive analysis of 1274 articles published in the top two American sociology journals between 1935 and 2005 found that roughly two thirds of these articles used quantitative methods.
{"url":"https://www.gabormelli.com/RKB/Quantitative_Analysis","timestamp":"2024-11-11T04:08:51Z","content_type":"text/html","content_length":"42824","record_id":"<urn:uuid:498a87a1-d7ee-4e48-9c51-9489d611c7d6>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00387.warc.gz"}
Verifying security protocols (Rust) Verifying security protocols (Rust) Suitable for Sigma protocols are a particularly simple and efficient kind of zero-knowledge proof and have seen wide deployment; they remain a leading kind of proof both in terms of simplicity and deployment but recent advances in succinct zero-knowledge proofs offer greater efficiency. The first efficient sigma protocol was introduced by Schnorr [2], several years before the class was defined. You can read more about sigma protocols in chapter 5 of this book [1]. To give you some insight into sigma protocols, we briefly discuss the most famous sigma protocol, the Schnorr protocol [2]. Given some public input (G, g, q, h) where G is a cyclic group of prime order q, and g and h are two generators of the group G, the prover claims that she knows a witness w for the statement h = g^w; the existence of such a w is immediate because g generates the group. However, does the prover know the witness w? In order to convince the verifier, the prover and the verifier do the following: 1. the prover picks a random number u, computes c = g^u, and sends c to the verifier 2. the verifier picks a random challenge e and sends it to the prover 3. the prover computes t = u + e ∗ w and sends t to the verifier The verifier accepts if g^t = c ∗ h^e, otherwise rejects. In this project, the goal is to use session types to capture the communication between the Prove and the Verifier. It is a very simple binary session type with three messages. Our first step will be to encode the Schnorr protocol, described above, in NuScribble, generate Rust API fron NuScribble encoding, and fill the rest of the cryptographic code. Later, we will develop the Parallel composition, AND composition, OR composition, and NEQ composition of sigma protocols (see the chapter 5 of [1]) [1] https://www.win.tue.nl/~berry/2WC13/LectureNotes.pdf [2] https://link.springer.com/article/10.1007/BF00196725
{"url":"https://www.cs.ox.ac.uk/teaching/studentprojects/898.html","timestamp":"2024-11-12T20:37:07Z","content_type":"text/html","content_length":"25772","record_id":"<urn:uuid:a7d820e7-9c06-4776-9076-fc64e3e93d81>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00555.warc.gz"}
A didactically motivated PLS prediction algorithm “A didactically motivated PLS prediction algorithm” Authors: Rolf Ergon and Kim H. Esbensen, Affiliation: Telemark University College Reference: 2001, Vol 22, No 3, pp. 131-139. Keywords: Latent variables models, PLS prediction, Kalman filtering Abstract: The intention of this paper is to develop an easily understood PLS prediction algorithm, especially for the control community. The algorithm is based on an explicit latent variables model, and is otherwise a combination of the previously published Martens and Helland algorithms. A didactic connection to Kalman filtering theory is provided for a methodological overview. PDF (900 Kb) DOI: 10.4173/mic.2001.3.1 DOI forward links to this article: [1] BELSLEY, D.A. (1991). Conditioning Diagnostics: Collinearity and Weak Data in Regression, Wiley New York. [2] BERNTSEN, H. (1988). Utvidet Kalmanfilter og multivariabel kalibrering, Report STF48 A88019, SINTEF, Trondheim, Norway. [3] DI RUSCIO, D. (2000). A weighted view on the partial least squares algorithm, Automatica 36, pp. 831-850 doi:10.1016/S0005-1098(99)00210-1 [4] ERGON, R. (1998). Dynamic system multivariate calibration by system identification methods, Modeling, Identification and Control, Vol. 19, No. 2, pp. 77-97 doi:10.4173/mic.1998.2.2 [5] ERGON, R. (1999). Dynamic System Multivariate Calibration for Optimal Primary Output Estimation, Ph.D. thesis, The Norwegian University of Science and Technology and Telemark University College, [6] ERGON, R. ESBENSEN, K.H. (2001). Static PLSR optimization based on Kalman filtering theory and noise covariance estimation, 7th Scandinavian Symposium on Chemometrics, Copenhagen. [7] ESBENSEN, K. (2000). Multivariate Data Analysis in practice, Camo ASA, Trondheim, Norway. [8] GREWAL, M.S. ANDREWS, A.P. (1993). Kalman Filtering: Theory and Practice, Prentice Hall, New Jersey. [9] ESBENSEN, K. HUANG, J. (2001). Principles of proper validation, In preparation for Journal of Chemometrics. [10] HELLAND, I.S. (1988). On the structure of partial least squares regression, Communications in Statistics, 1.2, pp. 581-607. [11] HØSKULDSSON, A. (1996). Prediction Methods in Science and Technology, Thor Publishing, Copenhagen. [12] JOHNSON, R.A. WICHERN, D.W. (1992). Applied Multivariate Statistical Analysis, Prentice Hall, New Jersey. [13] MARTENS, H. NÆS, T. (1989). Multivariate Calibration, John Wiley and Sons, New York. [14] NORRIS, K. H. (1993). Extracting information from spectrophotometric curves, Predicting chemical composition from visible and near-infrared spectra, Proc. IUFost Symp. Food Research and Data Analysis, Sept. 1982, Oslo, Norway.MARTENS and RUSSWORM, eds., Applied Science Publ., 95-113. [15] TICHONOV, A.N. ARSENIN, V.Y. (1977). Solutions of Ill-Posed Problems, V. H. Winston and Sons, Washington, DC. title={{A didactically motivated PLS prediction algorithm}}, author={Ergon, Rolf and Esbensen, Kim H.}, journal={Modeling, Identification and Control}, publisher={Norwegian Society of Automatic Control}
{"url":"https://www.mic-journal.no/ABS/MIC-2001-3-1.asp/","timestamp":"2024-11-11T03:15:43Z","content_type":"text/html","content_length":"74366","record_id":"<urn:uuid:577edaf8-2f53-4814-bf6b-b86e28e0fd04>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00865.warc.gz"}
Blog Archives Constructing a time series plot Author 11/30/2018 Frustrated with a particular MyStatLab/MyMathLab homework 0 Comments problem? No worries! I'm Professor Curtis, and I'm here to help. Howdy! I'm Professor Curtis of Aspire Mountain Academy here with more statistics homework help. Today we're going to learn how to construct a time series plot. Here's our problem statement: The company data represent the percentage of recent high school graduates graduated within 12 months before the given year end who enrolled in college in the fall. Construct a time series plot and comment on any trends. Part 1 OK, to construct the plot, we need actual data. So we're going to click on this link here, and here's our data. Now we're going to put that data into StatCrunch. OK, we've got the dated loaded here in StatCrunch. I'm going to resize this window so we can see everything a lot better. And now making the graph in StatCrunch is pretty easy. I just go up to Graph --> Index Time Plot. In the first area, I'm asked to select the columns that I want to graph, and this is what's represented on the y axis. So you can see here with our answer choices, the percent enrolled is on the y axis. So I'm going to select that there. Then down below, see how there's a format for the x and y axis, and you've got different options that you can go with? We actually want Time because our data is separated into years, so we're going to select Years under Type. And the starting year — the first year of our data is 1989, so I'll put that in. And then each data point proceeds increments of every one year. So we're just going to leave that default value there. And that's all we need to do. I press Compute!, and here is my time series graph. Now I just look for the answer option that best represents what I have here in StatCrunch, and clearly that's going to be answer option A. Excellent! Part 2 Now the second part of the problem asks us to comment on any trends. There's three answers that we have to choose from, so let's see what we're looking at here. Generally, if you look at the graph here, the trend starts out low, comes up high, and there's a down spot here and then it comes back up high again. So if you look at all of the data points and try to imagine a line of best fit going through all of the data, that line would be going upward as we go from left to right. So the general trend, even though there's some lows is increasing from left to right or increasing with time because from left to right we're increasing with time. So let's see which answer option best matches that. It looks like answer option A. Well done! And that's how we do it at Aspire Mountain Academy. Be sure to leave your comments below and let us know how good a job we did or how we can improve. And if your stats teacher is boring or just doesn't want to help you learn stats, go to aspiremountainacademy.com, where you can learn more about accessing our lecture videos or provide feedback on what you'd like to see. Thanks for watching! We'll see you in the next video. 0 Comments Interpreting a frequency table 1 Comment Howdy! I'm Professor Curtis of Aspire Mountain Academy here with more statistics homework help. Today we're going to learn how to interpret a frequency table. Here's our problem statement: Refer to the table summarizing service times (seconds) of dinners at a fast food restaurant. How many individuals are included in the summary? Is it possible to identify the exact values of all the original service times? Part 1 Okay, so the first part of his problem is asking us for the total number of individuals included in the summary. We have our frequency table here, so to get the total number, we just add up the number that are in each of the different categories or classes. So to do that, I'm going to whip out my calculator and just add in the frequency counts for each of the categories together and give me the total that's in the summary. I put my answer here in the answer field. Part 2 And now the second part of the problem asks, "Is it possible to identify the exact values of all the original surface times?" Well, the only information we have is the frequency counts that are in each category or class. So for this first category where we've got 60 to 119 seconds, we have a frequency count of 7. That tells us seven of the times in the total dataset are somewhere between 60 and 119. We don't know exactly where they are. We don't exactly what they are. All that we know is that there are seven of them within this range. So all seven of those data points could be 60. They could all be 65. They could all be 70. They could all be a hundred. Or hey, maybe there's three of them that are 70 and four of them are 100. That's another possibility. I mean, if you start thinking about it, you see there are endless numbers of possibilities for where these seven data points could lie within this range. So without knowing more information about the specific data point, we'd have no idea where those seven points are. So no, the frequency distribution doesn't tell us the exact value; it could be any value within those class limits. And that's going to be this answer here. Nice work! And that's how we do it at Aspire Mountain Academy. Be sure to leave your comments below and let us know how good a job we did or how we can improve. And if your stats teacher is boring or just doesn't want to help you learn stats, go to aspiremountainacademy.com where you can learn more about accessing our lecture videos or provide feedback on what you'd like to see. Thanks for watching. We'll see you in the next video. 1 Comment Howdy! I'm Professor Curtis of Aspire Mountain Academy here with more statistics homework help. Today, we're going to learn how to find and use expected frequency for goodness of fit hypothesis testing. Here's our problem statement: Refer to the data in the accompanying table for the heights of females. Complete parts (a) through (d) below. Part A OK, Part A is asking us to “enter the observed frequencies in the table below.” So here they’ve got different categories or classes for height, and then we’re asked to fill in the different frequencies. If we look at our data in our table here, notice how we’ve got females mixed in with males. So the gender variable is a dummy variable where you’ve basically got two options: One option is a zero and the other is a one. Here the ones are males, so the zeros must be females. We’ve got to sort through all this data to get just the heights for the females; we weren’t asked for the heights of the males, just the females. Notice also here the height data is actually unsorted itself too. So we’ve got two different sortings to do, and so to do that, I want to open the data in Excel. So the thing is I want to actually use Excel because we could do it in StatCrunch but StatCrunch is really clunky, and especially when it comes to sorting data. The sort feature in StatCrunch only lets you sort one level at a time whereas with Excel, it will let you sort multiple levels at the same time. And that’s what we want because it makes our job a little easier. So I’ve already pre-loaded the data here into Excel. And what I want to do now is actually sort this data out. So to do that, I’m going to come up to menu here — I’m coming off screen a little bit so you can’t see, but I’m selecting Data. And then I want to select Sort. And then here in the sort dialog box, I first want to sort by gender so we can get all the males out of the way. And then I’m going to add a level so that within the females I’m going to actually sort by height. This will actually help us to count to get the frequencies that we need to fill in our table here for our answer fields. So I hit OK, and now everything is automatically sorted. The other thing that is nice about Excel is that it makes counting really easy. And that’s all we’re doing with frequency is we’re getting counts of measurements that fall within each of these different classes or categories. So we want to count the number of data points that are less than 155.15; that’s our first category or class here. So I’m going to select that first cell with my data point here in my data, and then I’m going to scroll down to where I get the — let’s see, 155.15. So 155.15 is going to be every data point up to this one. So I’m going to hold down the Shift key on my keyboard and press the left key on my mouse. So now I’ve selected all those data points that are less than 155.15. And if I look down here at the bottom of my Excel window, I see that the count here is 20. So that made the counting super easy for me. I just put in a 20 there. And now I’m going to do the same thing for each of the different other classes, so I’m going to select the next cell here, and go down to 161.75. So I scroll down to . . . 161.75 would be there, and 41 is the count. Now, let’s see, 168.35. That would be up to there. The count is 34. “Greater than 168.35" — so this is the last category, so I’m going to go up to the last female data point, which is this one right here. Beyond that you get all the male data points. So this is my last category count — 19. Excellent! Part B Now, Part B says, “Assuming a normal distribution with mean and standard deviation given by the sample mean and standard deviation, find the probability of a randomly selected height belonging to each class.” So the probabilities are going to come out of our distribution calculator in StatCrunch; that’s the easiest way to get this. We want a normal distribution, and the mean and standard deviation are coming from our sample data itself. So first I’m going to get the sample mean and standard deviation and put those values into StatCrunch. To do that, the first thing I’m going to do is get rid of all the male data points here because we don’t need them. So I’m just going to scroll down here, select the row, delete all that. Now down here under the height column, I’m going to put in an AVERAGE function, I’m going to select all those data points, then close my parenthesis — there’s my average. I’ll put the standard deviation just below it, and there’s my standard deviation. So this is what I need to put into StatCrunch. So the easiest way for me to get into StatCrunch is just to put my data in, although once I get my data here into StatCrunch, see, the first thing I’m going to do is get rid of my data because I don’t need the data; I just need StatCrunch. So let’s move this down so we can see a little bit more what we’re doing. Alright, so here we’ve got the data in StatCrunch, and I’m just going to clear that out, and I’m going to clear you out. So now I just want my distribution calculator. I want the Normal distribution, and the mean and standard deviation are going to come from Excel. So if I move this over here, I can stick in my mean value that I calculated in Excel and the standard deviation value, also from Excel. Notice I’m typing in all the numbers I have. OK, that’s great. So now I’ve got this, and I can get rid of that. And to get the probability, I want less than 155.15, so less than is here. I just need to put in this random variable 155.15 — there’s my probability. I’m asked to round to four decimal points. And I’m going to do the same thing for each of the other four categories. The next one is between two values, so I’m going to select the Between option in StatCrunch. And then here we’re going to select 155.15. Here we’re going to select 161.75. There’s my probability. And then I just do the same thing for the columns that remain, again rounding to four decimal places. And I want greater than, so I go back to my Standard option, change this to greater than, and this one’s 168.35. And there’s my final probability. Good job! Part C Now, Part C says, “Using the probabilities found in Part B, find the expected frequency for each category.” This is pretty simple. All we have to do is get the total number of frequencies and then multiply it by each respective probability for each class. So I can add these numbers up, or if I want to be lazy — yeah, I want to be lazy! — so I’m going to go back to Excel, recognize that this first row is taken up with the column headings, so I’ve got 115 minus the one for the column headings is 114. So my sum is 114. If you want you can add these four numbers up, you’ll get the same thing. So 114 is what I want to multiply by each one of these respective probabilities to get the expected frequency counts. So I’ll pull out my handy dandy calculator here, and let’s move you down a little bit. OK, so, we have 114 times the first probability for the first category here is 0.1971. So there’s my expected frequency for the first class or category. And I just do the same thing with the numbers that remain. So 114 times the next probability gives me the next expected frequency. And I’m just going to finish this out here. Oops! That’s the wrong number. There’s the number I want. Excellent! Part D Now Part D asks for a hypothesis test, and there’s different parts to this, so let’s take a look. The first section in Part D says, “Identify the null and alternative hypothesis for this test.” For goodness of fit, it’s always going to be the same thing. Your null hypothesis will be everything’s equal. The alternative is going to be at least one of them is different. But you’re not just looking at one part; you’re looking at both parts together. There’s a part in Part A, and then there’s a corresponding part in Part C, because you’re looking at observed frequencies and expected frequencies. And so what we’re saying is that the observed and the expected should be the same. That’s the null hypothesis. And then at least one of those categories, it’s not going to be the same; it’s going to be different. So if I look back here at my answer options, I’m seeing . . . this one — answer option D. So for each class, the answer from Part A equals the one from Part C. And then the alternative is for at least one of them the answer is not equal. So that’s what I want to select. Good job! Now the next part asks for the test statistic. This part of the problem typically drives students bonkers until you understand that, the way they give this out, it’s easy to get it into StatCrunch, but you can’t just stick the data in. The data is in the wrong format because the data is just the raw data. StatCrunch needs, to do the goodness of fit test, is frequency counts. So you’ve got to put in the observed and the expected frequency counts. And you’ve already tabulated those up. Here’s your observed frequency counts, and here’s your expected frequency counts. So all I need to so is go back into StatCrunch and then put those numbers in, and then I can run my goodness of fit test. So here’s my StatCrunch window back, and I’m going to clear this distribution calculator out. So here in the first column, I’m just going to label this Observed. And then the next one I’m going to label Expected, so I can tell them apart. Now I’m going to come up here, and I’m going to copy these numbers from Part A up here in my Observed column. So I’ve got — whoops! — 20, 41, 34, and 19. We’re going to do the same thing for my Expected column. Make sure they’re in the same order so your answer can come out right. Now I’ve got the data here in StatCrunch. This is the frequency counts. This is what we need for goodness of fit testing. And it’s really easy. Come up to Stat –> Goodness-of-fit –> Chi-Square Test. The observed is the Observed; the expected, Expected; come down here and hit Compute!, e viola! There’s out test statistic right there in the results window. So it’s the next to last number there in that first table. How many decimal places do I want? Three? That’s going to be 0.764. Nice work! “Identify the P-value.” Well, the P-value is right there next to the test statistic. It again wants three decimal places. Excellent! And now “state the final conclusion that addresses the original claim.” Well, our P-value is going ot be well above any significance level that we’re going to want to test for. Here our significance level is 1%. Here we’ve got 85.8%, so we’re definitely outside the region of rejection, therefore we fail to reject. Whenever we fail to reject, there is not sufficient evidence. Well done! And that's how we do it at Aspire Mountain Academy. Feel free to leave your comments below and let us know how good a job we did or how we can improve. And if your stats teacher is just boring or doesn't want to help you learn stats, then go to aspiremountainacademy.com, where you can learn more about accessing our lecture videos or provide feedback on what you’d like to see. Thanks for watching! We’ll see you in the next video! 1 Comment Howdy! I'm Professor Curtis of Aspire Mountain Academy here with more statistics homework help. Today, we're going to learn how to use an ANOVA table for hypothesis testing. Here's our problem statement: A sample of colored candies was obtained to determine the weights of different colors. The ANOVA table is shown below. It is known that the population distribution are approximately normal and the variances do not differ greatly. Use a 0.025 significance level to test that claim that the mean weight of different colored candies is the same. If the candy maker wants the different color populations to have the same mean weight, do these results suggest that the company has a problem requiring corrective action? Source DF SS MS Test Stat F Critical F P-value Treatment 7 0.021 0.003 0.7555 2.4502 0.6260 Error 80 0.320 0.004 Total 87 0.341 Part 1 OK, the first part of this problem asks, “Should the null hypothesis that all the colors have the same mean weight be rejected?” Well, we have the ANOVA table here, and notice how here at the end we have our P-value, so we can use this and compare this with our significance level and determine the result of the test. So a P-value of 0.6260 is definitely greater than our significance level of 0.025. Therefore, we can’t fit the area of the P-value into the area of the significance level, and we are therefore outside the area of rejection. Therefore we are going to fail the null hypothesis. So we should not reject the null hypothesis because the P-value is going to be greater than our significance level. Excellent! Part 2 Now, the second part of this problem asks, “Does the company have a problem requiring corrective action?” Well, here in the problem statement it says that “the candy maker wants the different color populations to have the same mean weight.” That is the null hypothesis, that all of the colors of the same mean weight. We failed to reject the null hypothesis, which means it could be true. And if it’s true, then the candy maker is getting what the candy maker wants. And so therefore there is no problem requiring corrective action. So the answer is going to be No, no corrective action is required because — let’s see here. It is likely that the candies do not have equal mean weights. No, it is likely that they do. So we’re going to select answer A, even though it’s got this awkward double negative — “not likely that the candies do not have equal mean weights”! That’s like saying, yeah, because they do have the same weight. Excellent! And that's how we do it at Aspire Mountain Academy. Feel free to leave your comments below and let us know how good a job we did or how we can improve. And if your stats teacher is just boring or doesn't want to help you learn stats, then go to aspiremountainacademy.com, where you can learn more about accessing our lecture videos or provide feedback on what you’d like to see. Thanks for watching! We’ll see you in the next video! 0 Comments Howdy! I'm Professor Curtis of Aspire Mountain Academy here with more statistics homework help. Today, we're going to learn how to use goodness of fit for hypothesis testing of the best day for quality family time. Here's our problem statement: A random sample of 773 subjects was asked to identify the day of the week that is best for quality family time. Consider the claim that the days of the week are selected with a uniform distribution so that all days have the same chance of being selected. The table below shows goodness-of-fit test results from the claim and data from the study. Test that claim. Part 1 OK, the first part of this problem is asking us to determine the null and alternative hypotheses. For goodness of fit testing, that’s pretty much going to be the same thing every time. The null hypothesis is going to be that everything is the same. So in this case, all days of the week have an equal chance of being selected. The alternative hypothesis will always be that at least one of those will be different. So at least one day of the week has a different chance of being selected. Good job! Part 2 Identify the test statistic. Well, we work so many problems that by the time we get to Chapter 11, we’re pretty much in the habit of OK let’s get some data or some numbers, put them in StatCrunch, let StatCrunch chew some numbers, and spit out an answer. But the answer that we’re looking for is already given to us here just below the problem statement. It asks for the test statistic, and so here is our test statistic. There’s a number: 3021.822. So we just put that number here in the blank. Excellent! Part 3 This next part of the problem is exactly the same thing. It asks us to identify the critical value. The critical value is, again, listed up here in the results from some technology display that was already done by somebody. So all we have to do is copy the number over. Fantastic! Part 4 And now the last part of the problem asks us to state the conclusion. In this case, we’re going to compare the test statistic and the critical value. Well, here’s the critical value, which marks the boundary region in the tail of our distribution that is the critical region or region of rejection. Here’s our test statistic. It’s well within the right tail of our distribution, and so therefore we are going to be inside the region of rejection. Therefore we reject the null hypothesis. Whenever we reject the null hypothesis, there is sufficient evidence. And so, because the null hypothesis says everything is the same and we’re rejecting it, we are by default “accepting” the alternative, which says that at least one of the days is different, so it does not appear that all days have the same chance of being selected. Fantastic! And that's how we do it at Aspire Mountain Academy. Feel free to leave your comments below and let us know how good a job we did or how we can improve. And if your stats teacher is just boring or doesn't want to help you learn stats, then go to aspiremountainacademy.com, where you can learn more about accessing our lecture videos or provide feedback on what you’d like to see. Thanks for watching! We’ll see you in the next video! 0 Comments Download your free copy of the Stat 101 Nonlinear Regression Reference Sheet (which is used in the video) by clicking on the icon at right. Howdy! I'm Professor Curtis of Aspire Mountain Academy here with more statistics homework help. Today, we're going to learn how to find the best nonlinear regression model for stock market index values. Here's our problem statement: Listed below are the annual high values, y, of a stock market index for each year beginning with 1990. Let x represent the year, with 1990 coded as x = 1, 1991 coded as x = 2, and so on. Construct a scatterplot and identify the mathematical model that bets fits the given data. Use the best model to predict the annual high value of the stock market index for the year 2007. Is the predicted value close to the actual value of 11,655? Part 1 OK, so first we’re asked to construct a scatterplot. To do that, we need to make the actual model itself. Notice here in the problem statement how we’re using coded years. So we have to use coded years to make our model. And the data set that they give us, as you can see here, doesn’t have coded years. So we have to actually make that transformation. So let’s go ahead and do that first. I click this icon here so that I dump my data into StatCrunch. So here we are in StatCrunch. I’m going to resize this window so we can see everything a little bit better. Excellent! Now we’re done with you. So here in StatCrunch, we can transform these values into coded years. And to do that, I’m going to go up here to Data –> Compute –> Expression. I’m going to build my expression. And then notice here in the problem statement is says that 1990 is coded as x = 1, 1991 is coded as x = 2, so we’re basically, you know, saying that 1989, which is the year before 1990, is going to be x = 0. So 1989 is our zero year. So that’s what we’re going to need to subtract from each of our year values in order to make coded years. So I select the column for the years and add it to my expression, and then I’m going ot subtract out the zero year (1989), press Okay. You can label the column whatever you want. I typically leave this blank because the default is to go and label the column with the actual expression that was used to transform the data. And I like that; I like knowing what the data is and what the data came from, so I just go ahead and leave that blank. I press Compute! So now I got a new column here with the coded years. Now I’m ready to make my model. And the way they’re intending for you to work this is you’re going to use this data to make each of the general types of nonlinear models that you’re talking about in this section. That’s like five different models that you have to make! And then you have to compare P-values and adjusted R-squared values. And, you know, if you have to do it that way, then I guess you could do it that way to figure out what the best model is. But I find that it’s much easier if I just use a reference sheet. So I’m going to show you here a little tool that I developed. This is a reference sheet that you can use for answering these nonlinear regression equation questions that you get on your assignment, and it’s basically two tables. So the first table up here tells us what model we need to make, and the second table tells us, you know, how to manipulate the options in StatCrunch so we can get the numbers we need to put in our answer fields. So up here at the top, we look to see what the general model is going to be. And you can actually get this reference sheet if you go to the website and you can look to where the blog post is. If you’re watching this on YouTube, you know, just click the link on the description, and it’ll take you to the blog post there on the website, and then down below the viewing window there for the video you can see a link to download for free this actual reference sheet. Before you use it, if you . . . if you’re not in my class, then you’re probably not going to be using this in a testing situation, in which case, you’re just going to have to work the problem so many times that you understand that, when you see this type of application, it means you make this type of model. And you’re going to have to work the problem so many times that you remember the steps. That’s all I can give you. Now if you’re in my class, yeah, I’ll let you use this on a test because, I mean, the class isn’t about trying to make you expert model makers. It’s just giving you kind of a brief look at, understanding, cursory look at what’s the process for model making just to give you that general sense of appreciation for how it’s done. You know, I don’t mind you using a reference sheet like this on a test if you’re one of my students. If you’re somebody else’s student, well, you’re probably not going to get it. But at least this will help you work your homework problems, am I right? So the first thing we do is we look at this first table, and we’re looking for the application here in this area that matches what we’re looking at in the problem statement. So if we go back to our problem statement here, we can see that we’re talking about a stock market index. So I’m going to go back to this reference sheet, and I’m going to look for where it says “stock market index.” And I can look through all the different applications here, and I see it right here — stock market index. So that tells me I need to make a quadratic model. So I don’t have to make all five of these models to know that the quadratic one is the best. That’s really handy. And then of course the general form as you can see is listed here. Now I can go down to the second table, which is the data transformation table. This tells me how to use StatCrunch to get the answer I need to put in my answer fields. So, again, we’re making the quadratic model. Here’s the general form that we want to use. To get there in StatCrunch, this is the regression option that we want to select. So we want Polynomial, so in StatCrunch, I’m going to go up to Stat –> Regression –> Polynomial (because that’s what the table told me to select). And here I’m going to select my x- and y-variables. Remember to use the coded years for your X. I take the Y. Poly order here is 2; that’s what the table here is telling me to say. It says in the option window, I want to make sure — there’s nothing I need to do, no change I need to make in the options window, but it says to make sure that Poly order equals 2. And we see that it does. So we got everything we need, so we hit Compute! And out comes our results window. We’re looking for the scatterplot, so if I hit this little arrow over here in the corner, there’s my scatterplot with my line of best fit. Wow, that looks really great. So now I just look here at my points, and it’s pretty obvious that answer option A is going to be the one that matches. If I want, I can use these options here to blow up the graph, and make sure it looks similar. We’re looking OK. So answer option A is going to be what I select. Nice work! Part 2 Now the next part asks for the equation for the best model. We know it’s the quadratic equation. But if you come back here and look, see the general form here? So now I want to pick the answer option in StatCrunch that matches this general form: a-x-squared plus b-x plus c. So as I look at my answer options, that’s going to be answer option A that matches the general form. Now I selected the right one. To get the answers that I put here in my coefficients, again I go back to my table, and it says in the results window, it says, “a equals x-squared, b equals x, c equals Intercept.” So this a-b-c matches what you see over here in the general form: a, b, and c. And notice that matches the order of the answer fields that I need to put in here in my answer. So it’s going to be a, b, and c. And those numbers, it says, comes out of the results window. This is from the parameters table: x-squared, x, and intercept. So I come back here to StatCrunch, and notice I got here in my parameters table x-squared, x, and intercept. So these numbers here are what I need to put in my answer fields here. I’m asked to round to three decimal places. So here the first value is going to be this x-squared value here; that’s going to be 1-2-5-point — rounded to three decimal places, that’s going to be 3-5-2. Next, notice we have a negative sign here, so I’m going to have to carry that one through — 44-4-point-9-6-6. And then the last number coming up here — 3-4-2 — excuse me, 3-2-point-9-5. Good job! Part 3 And now the last part of the question asks to use the best model to predict the high value for the stock market index in the year 2007. I can make predictions with the model. I can actually, you know, look at this equation, and actually write it out, and punch it out on my calculator, or I can have StatCrunch do it for me. So go back to your options window, scroll down here and see where it says “Prediction of Y.” You put in a value for X, and it will calculate that for you in the regression equation. But remember — you used coded years for your model. This is why I hate coded years, because in order to use the model, you have to put in a coded year. So we can’t just put in 2007. We have to change that to a coded year. And we do that by subtracting out the zero year. So here in my calculator, I take 2007, subtract out my zero year (which was 1989), and I get 18. So 18 is the number I want to stick in here. I come down here and hit Compute! And then scroll down here. And down here at the very bottom, I see my predicted value 36000, which is a long ways away from 11,655. So that’s much higher than that. So, no, it’s not close at all to the actual value. We want either A or B. A says “dramatically greater.” B says “dramatically lower.” A is going to be what we want. And I stick the value that we get in here rounded to the nearest whole number. Nice work! And that's how we do it at Aspire Mountain Academy. Feel free to leave your comments below and let us know how good a job we did or how we can improve. And if your stats teacher is just boring or doesn't want to help you learn stats, then go to aspiremountainacademy.com, where you can learn more about accessing our lecture videos or provide feedback on what you’d like to see. Thanks for watching! We’ll see you in the next video! 0 Comments Howdy! I'm Professor Curtis of Aspire Mountain Academy here with more statistics homework help. Today, we're going to learn how to find the P-value given the test statistic. Here's our problem statement: Use technology to find the P-value for the hypothesis test described below. The claim is that, for a smartphone carrier’s data speeds at airports, the mean is mu = 18.00 Mbps. The sample size is n = 29, and the test statistic is t = 2.074. OK, finding the P-value here is really easy if you understand one simple concept. And that is that the P-value is the area in the tail of the distribution bounded by the test statistic. And that’s why the test statistic is given here, because it provides the boundary for that area in our distribution. But which distribution are we going to be using? Well, look at your test statistic. Your test statistic is a t-score. That means we’re going to be using the Student-t distribution. So I’m going to call up StatCrunch here, and inside StatCrunch, I’m going to go to Stat –> Calculators –> T because I want the Student-t distribution. Now here’s my Student-t calculator. The degrees of freedom is one less than the sample size, and that’s why they gave us the sample size here. In this case, it’s 29, so our degrees of freedom will be one less than that, which is 28. And then we need to get this inequality sign here right, and that’s got to match our alternative hypothesis. Well, to get the alternative hypothesis, we have to look at the claim. The claim here is that mean value equals 18. Well, equality by definition belongs with the null hypothesis, so we can’t adopt the claim as the alternative hypothesis, which means we have to take the complement of this. The complement of being equal to is being not equal to, and not equal to means we’re going to have a two-tailed test. So I’m going to come up here in my distribution calculator and select the Between option, because the P-value is actually split between the left and right tail of my distribution. And now I’ve got two test statistics: One is going to be positive, and one is going to be negative. So I’m going to put those values in here. And now I’ve got everything that I need, I go ahead and hit Compute! and out comes the area in between the tails. Remember that in StatCrunch, this Between option is calculating the area in between the tails. But the P-value is the area of the tails, so I have to take the complement of this area that’s between the tails to get the area of the tails. So I call up my little calculator here, take 1 minus this value here, and there is my P-value. I’m asked to round to three decimal places. Good job! And that's how we do it at Aspire Mountain Academy. Feel free to leave your comments below and let us know how good a job we did or how we can improve. And if your stats teacher is just boring or doesn't want to help you learn stats, then go to aspiremountainacademy.com, where you can learn more about accessing our lecture videos or provide feedback on what you’d like to see. Thanks for watching! We’ll see you in the next video! 12 Comments Howdy! I'm Professor Curtis of Aspire Mountain Academy here with more statistics homework help. Today, we're going to learn how to evaluate a multiple linear regression equation based on a given technology output. Here's our problem statement: Consider the correlation between heights of fathers and mothers and the heights of their sons. Refer to the accompanying technology output. Should the multiple regression equation be used for predicting the height of a son based on the height of his father and mother? Why or why not? OK, the first thing we’re going to do is take a look at this technology output. So I’m going to click on this icon here, and out comes the technology output. It looks very similar to what you would see if you were actually making the model in StatCrunch. The advantage of this is that the model has already been made, so all we have to do is evaluate the output to see if it’s something we want to use or not. There are two main things you want to look at. The first is the P-value. And when you’re looking at the P-value, don’t look over here at the parameter estimates table. You want to look at the P-value not of an individual parameter but of the model as a whole. And the P-value for the model as a whole is found here in the ANOVA table. So we’ve got a P-value that is practically zero. It’s hard to get a P-value better than that, so the P-value looks absolutely The other thing you want to check is the R-squared, or more appropriately, the adjusted R-squared values. And here we look down here at the bottom of our output and see that our adjusted R-squared value (0.3552) is not something that I would consider to be all that grand. However, the adjusted R-squared value is most useful for comparing models between each other. We’ve only got one model that we’re looking at here. So the main thing that we want to focus on is the P-value because we’ve only got one model that we’re evaluating. So the P-value itself looked pretty good, so let’s go through our answer options to see which of these answer options best matches what we saw with the technology output. We’re definitely — we’re definitely going to do this, so the only answer option that has a particular “Yes” to it is going to be answer option B. But before we select that, let’s go through the other answer options to make sure that we don’t want to select them. Answer option A says “No, because the P-value for the Intercept is not very low.” Well, we’re not looking at P-values for a particular model parameter. We want the P-value for the model as a whole, so that’s not going to work. Answer option C says, “No, because the R-squared and adjusted R-squared values are not very high.” And while that’s very true, this answer option doesn’t say anything about the P-value. And so it’s — the P-value here is what you want, especially when you’ve just one model. So this isn’t going to work for us. Answer option D says, “No, because the P-value for Father is smaller than the P-value for Mother.” Again, these are P-values for individual parameters of the model, and the only one we want to look at is the one for the model as a whole. So this isn’t going to work for us. The answer option we do want is answer option B. Excellent! And that's how we do it at Aspire Mountain Academy. Feel free to leave your comments below and let us know how good a job we did or how we can improve. And if your stats teacher is just boring or doesn't want to help you learn stats, then go to aspiremountainacademy.com, where you can learn more about accessing our lecture videos or provide feedback on what you’d like to see. Thanks for watching! We’ll see you in the next video! 0 Comments Howdy! I'm Professor Curtis of Aspire Mountain Academy here with more statistics homework help. Today, we're going to learn how to find the best regression equation given multiple variables. Here's our problem statement: The accompanying table provides data for tar, nicotine, and carbon monoxide (CO) contents in a certain brand of cigarette. Find the best regression equation for predicting the amount of nicotine in a cigarette. Why is it best? Is the best regression equation a good regression equation for predicting the nicotine content? Why or why not? Part 1 OK, the first part of our problem asks us to find the best regression equation, and notice we’ve got three different answer options to select from. And that’s because we’re looking at three different models. So the first model (answer option A) has just the carbon monoxide content. The second answer option (answer option B) — that model has only the tar content. And the last model (answer option C) has both the tar and the carbon monoxide for variables. So we need to make regression equations for each of these models. And we need to compare values from each of those models. So to make the models, first we need to get the data and dump it into StatCrunch. So to do that, I’m going to click on this icon; this brings up a table with the data. And now I’m going to stick that data into StatCrunch. I’ll resize this window so we can see a little bit better everything that’s going on. And now, to make these first two models, we could go into Stat –> Regression –> Simple Linear. But we know we’re going to have a model with two variables there at the end. So let’s just use the one menu option of going to Stat –> Regression –> Multiple Linear. Here in my options window, I can select my Y-variable. This is what comes out of my regression model, which in this case is the nicotine. And then the first model I want to make has just the carbon monoxide, so I’m going to select that here for my X-variable. There are no interactions with this model. Interactions are where you have more than one variable being multiplied together to make another term in your regression model equation. Here we only have single variables in each individual term, so there’s no variables being multiplied together. And so we don’t have any interactions, so there’s nothing to select here. And these default options here where these boxes are not selected is just fine for us. So we press Compute! And out comes this results window that has the results that we need for evaluating this particular regression model. To help us evaluate the models, what I’ve done is gone to Excel and made a little chart here. So what we can do is copy that information over into Excel for each of the models, and then we can compare in one spot which model is the best and then take the values from that model and stick it into the answer fields that are appropriate for our assignment. So the first thing we’re looking for is the adjusted R-squared value. That’s going to be down here towards the bottom of my results window. The P-value — notice there’s different P-values here in my parameter estimates table. The one that I want, though, is here in the ANOVA table. I want this P-value here in the ANOVA table; that’s the one that I want for the model. And then, just in case we end up selecting this model, so we don’t have to go back and redo all of this, let’s just take the values for the intercept and the slope and stick them here in our table in Excel. Our assignment is asking us to round to three decimal places, so I’m going to take these values out to five decimal places. I want two extra decimal places so that I can avoid rounding errors to put my actual answer into my assignment fields. But I don’t want to incur any rounding errors that come from rounding these values themselves. I don’t want to transcribe the entire number, so in order to avoid that, I just want to shorten this up to transcribe it here. So I’m just going to take two extra decimal places, so that means I want five. There. And there’s the first Now to get the second model — OK, notice what we need for the second model. The second model is we’re just looking at the tar. So I’m going to replace the carbon monoxide with the tar. Instead of going through the menu options again in StatCrunch, I’m just going to come up the Options button here, click on Edit, and then I’m just going to switch from CO to Tar. Hit Compute! And now I’ve got a new model. And I can take those values out. So my adjusted R-squared value is down there at the bottom of the screen. And my P-value I get from my ANOVA table. Notice it says we have less than 0.001; that’s for all practical purposes zero. And then I take my intercept value and my slope. And that’s the second model. Now I’m ready to make the third model. I’m going to go back into my options window to do that. Notice the order in which the variables appear. Tar is first, and then comes CO. So I’m going to put those variables in the same order in my regression model so that when I’m transcribing numbers out, it’ll be easier not to get them confused. And here’s my last model. Adjusted R-squared value goes here. P-value goes there. I’ve got an intercept, slope 0.09596, and the last slope value — notice the negative sign there. OK, so now I’ve got my values here. Now I can bring that over here, and we can compare and see what we’re looking at here. We want a high value for adjusted R-squared and a low value for the P-value. Well, looking at the P-values, answer option A has a significantly higher P-value than the other two options, so we’re going to take that, and we’re just going to cross that off our list. So we’re not going to look at that any more. And now we’re choosing between answer options B and C. They have the same P-value, so we look at the adjusted R-squared value. And answer option C has a significantly higher value for adjusted R-squared. So we’re going to select answer option C. If the adjusted R-squared values were reasonably close together, then we would say that adding in this extra variable doesn’t give you that much more benefit from a higher adjusted R-squared value, so it’s not going to make that better of a model. But this is a ten percentage point difference here; that’s pretty significant. So we’re going to say that answer option C is the one that we’re going to want to select. And if I wanted to highlight that, I could do something like that so I can make sure I get the right numbers out. And then I just transcribe my numbers here. So I want three decimal places. I’ve got them rounded to five so I can avoid rounding errors when I’m putting them here in my answer field. So the first value is my intercept, and then I want the first slope, and then I want the second slope. Again, note the negative sign. Good job! Part 2 Now the second part of our problem asks us, “Why is this equation best?” Well, as we just got done saying, we’ve got a high adjusted R-squared value and a low P-value; those are the main two determinants that we’re looking for. The other thing we look for is the number of variables, and though we’ve got more number of variables in this equation than we do in the other two, we’ve got a significantly higher adjusted R-square value that makes adding that extra variable So we want highest adjusted R-squared value, so looking at my answer options, it could be B, or it could be D. I want low P-value, so we’ve got low P-value here, and low P-value here, so that’s good. This says, for answer option B, “removing either predictor noticeably decreases the quality of the model.” And that’s true. If you take that second variable out, notice you get a ten percentage point drop in adjusted R-squared. So that’s a possibility, but let’s check D just to be sure. It says only a single predictor variable is in our equation, and we’ve noticeably got two. So it can’t be D; it has to be B. Fantastic! Part 3 And now the last part of our problem asks, “Is the best regression equation a good regression equation for predicting the nicotine content? Why or why not?” Well, here you want to be looking at your P-value. Here you’ve got the lowest P-value you could possibly have, which is zero. And so that’s going to tell us that, yes, this model will fit our data pretty well. So we want the answer option that says, “Yes.” So that’s going to be A or D. And A says, “Small P-value indicates good fit.” Answer option D says, “Large P-value indicates good fit.” Obviously that’s not true, so we want answer option A. Good job! And that's how we do it at Aspire Mountain Academy. Feel free to leave your comments below and let us know how good a job we did or how we can improve. And if your stats teacher is just boring or doesn't want to help you learn stats, then go to aspiremountainacademy.com, where you can learn more about accessing our lecture videos or provide feedback on what you’d like to see. Thanks for watching! We’ll see you in the next video! 0 Comments Howdy! I'm Professor Curtis of Aspire Mountain Academy here with more statistics homework help. Today we're going to learn how to interpret a frequency table. Here's our problem statement: Refer to the table summarizing service times (seconds) of dinners at a fast food restaurant. How many individuals are included in the summary? Is it possible to identify the exact values of all the original service times? Okay, so the first part of his problem is asking us for the total number of individuals included in the summary. We have our frequency table here, so to get the total number, we just add up the number that are in each of the different categories or classes. So to do that, I'm going to whip out my calculator and just add in the frequency counts for each of the categories together and give me the total that's in the summary. I put my answer here in the answer field. Excellent! Source DF SS MS Test Stat F Critical F P-value Download your free copy of the Stat 101 Nonlinear Regression Reference Sheet (which is used in the video) by clicking on the icon at right. Frustrated with a particular MyStatLab/MyMathLab homework problem? No worries! I'm Professor Curtis, and I'm here to help. Company Support © COPYRIGHT 2018-2020. ALL RIGHTS RESERVED. Stats About Contact Aspire Mountain Academy is a division The Company FAQ of Aspire Mountain Media LLC, Courses Terms of Use © COPYRIGHT 2018-2020. ALL RIGHTS RESERVED. Contact Aspire Mountain Academy is a division FAQ of Aspire Mountain Media LLC, Terms of Use
{"url":"https://www.aspiremountainacademy.com/homework-help/archives/11-2018","timestamp":"2024-11-01T18:54:03Z","content_type":"text/html","content_length":"117654","record_id":"<urn:uuid:57e7ed88-4833-4fd8-a24c-f25e3afe27ec>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00023.warc.gz"}
mixed integer conic quadratic optimization An elementary, but fundamental, operation in disjunctive programming is a basic step, which is the intersection of two disjunctions to form a new disjunction. Basic steps bring a disjunctive set in regular form closer to its disjunctive normal form and, in turn, produce relaxations that are at least as tight. An open question is: What … Read more Intersection Cuts for Mixed Integer Conic Quadratic Sets Balas introduced intersection cuts for mixed integer linear sets. Intersection cuts are given by closed form formulas and form an important class of cuts for solving mixed integer linear programs. In this paper we introduce an extension of intersection cuts to mixed integer conic quadratic sets. We identify the formula for the conic quadratic intersection … Read more
{"url":"https://optimization-online.org/tag/mixed-integer-conic-quadratic-optimization/","timestamp":"2024-11-11T09:44:44Z","content_type":"text/html","content_length":"85582","record_id":"<urn:uuid:72850f63-8c35-4871-b2d4-f36bbbf4856f>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00308.warc.gz"}
Equations of Motion in context of initial vertical velocity 30 Aug 2024 Journal of Theoretical Physics Volume 12, Issue 3, 2022 Equations of Motion with Initial Vertical Velocity: A Theoretical Framework This article presents a comprehensive treatment of the equations of motion for an object under the influence of gravity, taking into account an initial vertical velocity. We derive the parametric equations that describe the trajectory of the object and provide a detailed analysis of the resulting motion. The study of the motion of objects under the sole influence of gravity has been a cornerstone of classical mechanics since the time of Galileo and Newton. However, in many real-world scenarios, an initial vertical velocity is imparted to the object, which significantly alters its trajectory. In this article, we focus on developing a theoretical framework for understanding the equations of motion with an initial vertical velocity. Equations of Motion Let’s consider an object of mass $m$ that is projected vertically upwards with an initial velocity $v_0$. The acceleration due to gravity is denoted by $g$. We can describe the motion of the object using the following parametric equations: y(t) = v_0*t - (1/2)*g*t^2 x(t) = 0 Here, $y(t)$ represents the vertical displacement of the object at time $t$, and $x(t)$ is the horizontal displacement. To derive these equations, we start with the basic kinematic equation: $v_f = v_i + a*t$ where $v_f$ is the final velocity, $v_i$ is the initial velocity, $a$ is the acceleration, and $t$ is time. In this case, the initial velocity is $v_0$, and the acceleration due to gravity is $-g$. We can rewrite the equation as: $v(t) = v_0 - g*t$ The vertical displacement can be found by integrating the velocity with respect to time: $y(t) = \int (v_0 - g*t) dt$ Evaluating this integral, we get: y(t) = v_0*t - (1/2)*g*t^2 The horizontal displacement remains constant at $x=0$, as there is no acceleration in the x-direction. In conclusion, we have presented a theoretical framework for understanding the equations of motion with an initial vertical velocity. The parametric equations derived in this article provide a comprehensive description of the trajectory of an object under the influence of gravity and an initial vertical velocity. This work has implications for various fields, including physics, engineering, and astronomy. [1] Galileo, G. (1638). Dialogue Concerning the Two Chief World Systems. [2] Newton, I. (1687). Philosophiæ Naturalis Principia Mathematica. Note: The references provided are classic works in the field of classical mechanics that laid the foundation for our understanding of motion under gravity. Related articles for ‘initial vertical velocity’ : • Reading: Equations of Motion in context of initial vertical velocity Calculators for ‘initial vertical velocity’
{"url":"https://blog.truegeometry.com/tutorials/education/7a14218ade8f9f895d9d38d9166d861e/JSON_TO_ARTCL_Equations_of_Motion_in_context_of_initial_vertical_velocity.html","timestamp":"2024-11-06T08:17:02Z","content_type":"text/html","content_length":"23623","record_id":"<urn:uuid:bce85288-1252-4e8f-a370-693bd1b8635a>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00244.warc.gz"}
Multiplying Special Case Polynomials | sofatutor.com Multiplying Special Case Polynomials Basics on the topic Multiplying Special Case Polynomials Are you getting tripped up and slowed down by using the FOIL and area model methods to factor and find the products of polynomials? When working with polynomials, it’s important to keep an eye out for patterns that can help you solve problems quickly and accurately. Watch this video, and you just might learn some new tricks to help you work with special case polynomials. For the square of a binomial sum, be aware the product is a perfect square trinomial: (a+b)(a+b) = a² + 2ab + b². For the square of a binomial difference, the product is another perfect square trinomial: (a-b)(a-b) = a² -2ab + b². Notice when the terms in the binomials are added, the middle term in the product is positive, and when the terms in the binomials are subtracted, the middle term in the product is negative. The product of a binomial sum and a binomial difference is the difference of two squares or a DOTS: (a+b)(a-b) = a² – b². To identify a DOTS, look for perfect squares. Watching out for patterns can help you work with polynomials. If you can learn to recognize these three patterns, you can work smart rather than working so darn hard. Understand the relationship between zeros and factors of polynomials. Transcript Multiplying Special Case Polynomials GSquared and FoxyCooke are competing hackers. They're always trying to best each other. This time, they're trying to hack into a company’s security system. Whoever gets into the system first will be the winner of eternal fame and glory. GSquared is at the first security wall. To gain access to the first level, he has to simplify this expression, the sum of 'a' and 'b' squared, AKA the sum of 'a' and 'b' times the sum of 'a' and 'b'. The binomials GSquared knows a good strategy when he sees one. He uses an area model to find the product of the two binomials.To set up the area model, he divides a rectangle into 2 rows and 2 columns and labels each of the 4 sections with a term from the 2 binomials. Then he calculates the area of each section and writes it down in the corresponding section of the area model. (a)(a) = a², (a)(b) = ab, (b)(a) = ab, (b)(b) = b². Calculate the product of two binomials He adds the four terms together, groups the like terms and finally, writes the expression in the standard form. The result? a² + 2ab + b². He knows he's got this, so he enters the password…Gosh darn it. That FoxyCooke got here first. GSquared was too slow. Does Foxy know a faster way to calculate the product of two binomials? She does know a faster way… Simplify the expression When Foxy saw the expression, she immediately recognized the pattern. She knows the square of a binomial sum is equal to a perfect square trinomial! GSquared reaches the wall of the 2nd level. Hoping to be faster than last time, he tries a different method to simplify the expression. He uses the FOIL method to simplify the difference of a and b times the difference of a and b. The FOIL method FOIL stands for first, outside, inside, and last. This is a mnemonic device used to simplify the product of two binomials. Let's try it out, but we have to go fast! First, multiply the first two terms, (a)(a) = a². Next, multiply the outer terms, (a)(-b) = -ab. Now the inside terms, (-b)(a). The product is '-ab'. Last, the last terms, (-b)(-b) = b². The perfect square trinomial Combine the like terms and... the simplified expression is a² - 2ab + b². He enters the password and...Although he used the FOIL method, he's foiled again by Foxy – she beat him a second time. How did she do it? Again Foxy recognized a pattern. This expression is the square of a binomial difference and she used another perfect square trinomial to crack the password in just seconds. Multiplication problem Finally the last wall... This time the task is a multiplication problem, 42 times 38. GSquared is a pro at multiplication, so he's not worried... Are you kidding me? That Foxy is a real fast fox. Look how she used the difference of two squares to solve the problem fast like lightning. When you have the expression of the sum of a and b times the difference of a and b, watch what happens when you apply the FOIL method the inner and outer products cancel each other out to leave the difference of two squares! The DOTS expression This is also known as a DOTS expression That Foxy, she applied DOTS to the number 42 and 38 rewriting them as 40 plus 2 and 40 minus to get a difference of two squares, then she simplified the exponents and calculated the difference to equal 1,596! She recognized the pattern of a difference of two squares! Pay attention and you'll start to notice these patterned expressions too, just like Foxy. This summary Foxy used is super helpful to help you learn to recognize the polynomial patterns, so you can save time and be Foxy-fast. Foxy's the clear winner, for sure , but hold on... What’s going on here? It looks as though our two hackers got hacked…by their mom! It`s time for dinner. Multiplying Special Case Polynomials exercise Would you like to apply the knowledge you’ve learned? You can review and practice it with the tasks for the video Multiplying Special Case Polynomials. • Calculate the term ${(a+b)^2}$. Imagine you cut the square with side lengths ${a+b}$ into two squares and two rectangles. The area of the original square is the same as the sum of the two smaller squares and two rectangles. You can also use the FOIL method to simplify ${(a+b)(a+b)}$. The expression ${(a+b)(a+b)}$, or ${ (a + b)^2}$, can be simplified using the area model. Imagine a square with side lengths ${a+b}$; its area is ${(a+b)^2}$. You can divide this square into two smaller squares with sides $ a$ and $b$, respectively, as well as two rectangles with a height $a$ and length $b$. The areas of the squares are ${a^2}$ and ${b^2}$, while each rectangle has an area equal to ${ ab}$. Combining the like terms gives us ${(a+b)^2=a^2+2ab+b^2}$. • Summarize the multiplication of special binomials. The difference of two squares is ${(a+b)(a-b)}$. You can simplify the expansion of binomials by using the FOIL method. For ${ (a+b)(a+b)}$, use the FOIL method: F multiply the first ${a \times a}$ O multiply the outer ${a \times b}$ I multiply the inner ${b \times a}$ L multiply the last ${b \times b}$ How can you remember the pattern for multiplying of binomials? Perfect Square Binomials ${ \begin{array}{rcl} (a + b)(a + b) &=&\\ &=& a^2 + ab + ab + b^2\\ &=& a^2 + 2ab + b^2 \end{array}}$ ${\begin{array}{rcl} (a - b)(a - b) &=&\\ &=& a^2 - ab - ab + b^2\\ &=& a^2 - 2ab + b^2 \end{array}}$ Difference of Two Squares ${\begin{array}{rcl} (a + b)(a - b) &=&\\ &=& a^2 - ab + ab + b^2\\ &=& a^2-b^2 \end{array}}$ • Use the area model to simplify the expression. ${ \begin{array}{rcl} (4y)^2 &=&\\ 4^2 \times y^2 &=& 16y^2 \end{array}}$ Draw the square and write the areas in the corresponding squares or rectangles. If you have a perfect square binomial such as $(x + a)^2$, you can approach this in two ways: 1.$~$Using FOIL ${\begin{array}{rcl} (x + a)^2 &=&\\ &=& (x + a)(x + a)\\ &=& x^2 + ax + ax + a^2\\ &=& x^2 + 2ax + a^2\\ ~\\ \end{array}}$ 2.$~$Using the FOIL shortcut for Perfect Square Binomials ${ \begin{array}{rcl} ~\\ (x + a)^2 &=&\\ &=& x^2 + 2(ax) + a^2\\ \end{array}}$ ${ \begin{array}{rcl} (x + 5)^2&=&\\ &=& x^2 + 2 \times 5x + 5^2\\ &=& x^2 + 10x + 25 \end{array}}$ ${ \begin{array}{rcl} (3 + 3x)^2&=&\\ &=&3^2 + 2 \times 3 \times 3x + (3x)^2\\ &=& 9x^2 + 18x + 9 \end{array}}$ ${ \begin{array}{rcl} (1 + 4y)^2&=&\\ &=&1^2 + 2 \times 1 \times 4y + (4y)^2\\ &=& 16y^2 + 8y + 1 \end{array}}$ ${ \begin{array}{rcl} (2x + 4)^2&=&\\ &=&(2x)^2 + 2 \times 2x \times 4 + 4^2\\ &=& 4x^2+16x+16 \end{array}}$ • Explain the FOIL method to Jack. Here you see an example for $(2x+4)^2$ using the area model. This corresponds to the FOIL method. $(2x + 4)^2 = (2x)^2 + 2x(4) + 4(2x) + 4^2$ Remember, you have to apply the exponent to every term in the parentheses. ${ \begin{array}{rcl} (3y)^2 &=&\\ &=& 3y(3y)\\ &=& 9y^2 \end{array}}$ Look at this example for the FOIL method: ${ \begin{array}{rcl} (2a - b) &=&\\ &=& 2a(2a) + 2a(-b) + -b(2a) + -b(-b)\\ &=& (2a)^2 - 4ab + b^2 \end{array}}$ We can expand the given perfect square binomial, $(5x-2y)^2$, using the FOIL method. First, we transform the expression: ${ (5x-2y)^2=(5x-2y)(5x-2y)}$ Let's start. F multiply the first ${\large 5x(5x) = 25x^2}$ O multiply the outer ${\large 5x(-2y) = -10xy}$ I multiply the inner ${\large -2y(5x) = -10xy}$ L multiply the last ${\large -2y(-2y) = 4y^2}$ We still have to combine like terms. ${(5x - 2y)^2 = 25x^2 - 20xy + 4y^2}$ Party on, Jack! • Explain the FOIL method. You can use the distributive property for ${ a(a-b)=a^2-ab}$ and for ${ -b(a-b)=-ab+b^2}$. Here is another example. We can simplify ${ (a - b)^2 = (a - b)(a - b)}$ using FOIL multiplication. For ${ (a + b)(a + b)}$, use the FOIL method: F multiply the first ${ a \times a}$ O multiply the outer ${ a \times -b}$ I multiply the inner ${ -b \times a}$ L multiply the last ${ -b \times b}$ For the final step, we add all the resulting terms: ${ a^2 - ab - ab + b^2 = a^2 - 2ab + b^2}$ • Simplify the following expressions. Use the formulas above and think about what to assign to $a$ as well as to $b$. For example: $(2x-4)$. You use the second formula with $a = 2x$ and b = 4$. You can use those formulas also for calculating products of numbers ${ 22 \times 18 = (20 + 2)(20 - 2)}$. Now we can plug in ${ a = 20}$ and ${ b = 2}$ in our third formula. ${ \begin{array}{rcl} 22 \times 18 &=& (a + b)(a - b)\\ &=& a^2 - b^2\\ &=& 20^2 - 2^2\\ &=& 400 - 4\\ &=& 396 \end{array}}$ These formulas show a pattern to simplify the multiplication of special binomials. Now, we'd like to practice using these formulas by substituting ${a}$ and ${ b}$ in the expressions: First expression: $(12 - 3z)^2$ When ${ a = 12}$ and ${ b = 3z}$, we can use the second equation, ${(a + b)^2}$. ${ \begin{array}{rcl} (12 - 3z)^2 &=&\\ &=& 12^2 - 2(12)(3z) + (3z)^2\\ &=& 144 - 72z + 9z^2 \end{array}}$ Second expression: $96 \times 104$ We should recognize that we can use the third formula: ${ \begin{array}{rcl} 96 \times 104 &=& (a + b)(a - b) \end{array}}$ Substituting $a = 100$ and $b = 4$, we get: ${ \begin{array}{rcl} &=& (100 - 4)(100 + 4)\\ &=& (100 + 4)(100 - 4)\\ &=& 100^2 - 4^2\\ &=& 10000 - 16\\ &=& 9984 \end{array}}$ Third expression: $(1 + 9z)^2$ Using the second formula with $a = 1$ and $b = 9z$, we get: ${ \begin{array}{rcl} (1 + 9z)^2 &=&\\ &=& 1^2 + 2(1)(9z) + (9z)^2\\ &=& 1 + 18z + 81z^2 \end{array}}$ Fourth expression: $(-3z - 4)^2$ Given the expression $(-3y - 4)^2$, we can use the first formula, but we have to remember to pay attention to the signs. Using $a = -3y$ and $b = -4$, we get: ${ \begin{array}{rcl} (-3z - 4)^2 &=&\\ &=& (-3z)^2 + 2(-3z)(-4) + (-4)^2\\ &=& 9z^2 - 24z + 16 \end{array}}$
{"url":"https://us.sofatutor.com/math/videos/multiplying-special-case-polynomials","timestamp":"2024-11-11T08:22:38Z","content_type":"text/html","content_length":"158849","record_id":"<urn:uuid:5556b3e2-b548-4f13-8eab-a0b9fb43e04f>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00742.warc.gz"}
Discrete Time Mean Reversion Models | dwightgrantvaluation top of page Discrete Time Mean Reversion Models I recently valued a complex derivative where WTI oil was the underlying price. The valuation required a Monte Carlo simulation. When the valuation was reviewed by a major accounting firm, the values produced by their specialists deviated from mine more than expected. After a cooperative exchange, I identified the primary source of the difference. The reviewers’ model produced average future spot prices that were not equal to the market forward price. This violates the “no-arbitrage” requirement of pricing models. The source of the error was the omission of “drift-adjustment” terms (DATs). The application of Ito’s lemma gives rise to the DAT in mean-reversion models, just as it does in geometric Brownian motion models. This has been recognized in the continuous-time literature on mean-reversion models.[1] However, I wanted to refer the reviewers to a published reference that would explain how to calculate the DATs required in a discrete time model. If such a reference exists, I was not able to find it. Therefore, I prepared this note. After completing the derivation of DATs for a Monte Carlo simulation, I wondered how these results would apply to building a lattice of prices for a mean-reverting process. I reviewed Hull’s[2] presentation of the building of a trinomial lattice of prices for a mean-reverting process. His method involves a search process to identify the DATs. The derivation of the DATs for the Monte Carlo implementation provides the basis for the analytical calculation of the DATs required by lattices. I use Hull’s example to demonstrate how to build a lattice without the search process. For details please see the PDF file. 1. Eduardo S. Schwartz. “The Stochastic Behavior of Commodity Prices: Implications for Valuation and Hedging”, Journal of Finance, Vol. 52, No. 3, p. 926. (I want to thank Andrew Lyasoff for suggesting this reference.) 2. John C. Hull. Options Futures and Other Derivatives, 8th Edition, 2012, Prentice-Hall, New York, N. Y. bottom of page
{"url":"https://www.dwightgrantconsulting.com/blank","timestamp":"2024-11-12T09:34:34Z","content_type":"text/html","content_length":"289897","record_id":"<urn:uuid:8286926a-197b-4789-b731-34a73b31fea9>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00306.warc.gz"}
How can I improve my problem-solving skills for the GED Math exam? | Hire Someone To Take My GED Exam How can I improve my problem-solving skills for the GED Math exam? This is a question to know about. Most exam questions are written after prerequisites (which isn’t always true often), and a good cheat-sheet is a tool in their favour. However these have varied a lot of parameters. The question is most crucial, so the solution I am thinking of is to use some cheat-sheets which represent the various options of many options available. In the cheat sheets I use, you can choose a certain parameter (of your choice) in order to choose the most correct answer, that is not a result of a mistake. I am slightly confused by the reason for this, and it is stated that it is not the solution itself that “works”. Especially if you just want to use it, it’s time to take the exam to have more ideas than the other answers in question 1. How can I improve my problem-solving skills for the GED Math exam? Hello! Would you mind me pointing out the questions that I have to show in my website? My link is already posted, but please leave me a reference on my site, otherwise I can ask for a few more questions. There are a few more links if you prefer. This post could get a lot of time through the test, so be sure to check it out! The good news for those who can definitely find something to do seems to be that the HWA is very hard! The test is very well organised, as is most of the questions. How about the O/S? How about the Q- and Y-list? How about various other options and their results? What about most other questions that I cannot find everything?! When I get up at 6am at Sunday morning, I have to go into school. I am a very big party. I pay for the gym. I am looking for something to eat, and it turns out that 1 day of school and then another day about my daycare. So, I was thinking of a different kind of school, but I am not sure about that. I need help in helping here. I will try to describe them in details later so you will be able to pay me nicely. So, please post something which includes suggestions on how to solve the problem. Thank you! When I start my first position at the end of school, I begin to feel a lot of pressure and need to sit down before I even think about the idea. I know today that I should stand there and I will need to sit down just six to get comfortable a little. Take Online Classes For You This causes me to have to spend a long time sitting with a book on my lap, or a blanket on top of it, in between work and home. The first thing that I want to do is sit and look at my hand in the kitchen table since I am playing with my mum which contains a book which will allow me to play with it. I really want to be able to know the meaning of the words that my mum speaks which are spoken by some people. So, I can simply say my mum means it in a very literal way. I would then move my hand so that it should look like it means “I will find out about school soon,” and is “I can find out my name”, if I am close to the middle of the sentence at all. I would then say the title “FamousHow can I improve my problem-solving skills for the GED Math exam? I knew from looking around that you have an unlimited amount of learning challenges. In fact, some of my best teachers are on a little over a year to four levels in High School Life. They do not give me some time to focus on the difficult and interesting problems in their classes. This is because they do not give me time to make some of my problems from the fact that I am so organized in my writing and my research task. In most GED Maths, I have prepared my students from that age I know how to write and experiment with new ideas but I suspect that they will eventually cut my work and fall short. How can we get any more talented teachers to be focused on the subject seriously? I hope to have a look into these problems and I can help you achieve your task. Here are a few links to help you work out your exact problem: That’s an excellent comment for all the subjects below: What’s the difference between the GED Math Test Questions (GED-3) and GED-4, the main issues in the GED Math exam for English teachers? GED-3: The main things you must understand about GED-3 – 1) Your exam is highly complex – what you need to know right now – and what you are supposed to be doing. – With your answer What you have to understand about GED-3 – 2) Your exam is not easy-to-understand – what you need to understand, in general. – This is your dilemma. – In the later GED-3, your answers will ask about visit our website or less the things you are really sure about the exam but it is important that you are trying to understand and have a sense of what’s coming next. – With your answer What you have to understand about GED-4 – 3) The main things you are supposed to be doing right now – what your questions are and what they are What you are supposed to be doing right now – How is this done? (GED-4) – This is how you do a GED-4: – How good is your exam (GED-4.7) – How great is your exam (GED-4.8) – How poor is your exam (GED-4.11) GED-4.7: The simple truth of the word “great” is right here and now. Take My Spanish Class Online You go through a rigorous set of tasks (GED-4.9) until you have taken a deep interest in your exam like most GED teachers handle it. Your homework could easily boil down to this… GED-4.9: Your exam is like a science test. You are not going to study everything there. In GED-4.8 you will just hold your test with some simple tests (one test over another) and read it a couple times. (You will be admitted to the GED-4.9 test. Here is the list of things you should do as you try to read, trying to figure out your problem from a test card. Even if you don’t have your own scorecard yet (which is my favorite test) you will answer better than in GED-3… GED-4.8: You study the questions and questions around you like there is bigger world (GED-How can I improve my problem-solving skills for the GED Math exam? If you’re familiar with Maths quizzes and English Grammar, chances are we know your answer right off the bat. So, how can you improve your calculus knowledge? Since I recently taught this to everyone in college (my first online class!), I’d rather start by having some simple answers first. The trick is to make your answers as simple as possible without calling some of them wrong. Hire Someone To Do Online Class If you have as many answers (7-10) as you want, you probably want to add up the best answers. For example, you might want to add 4-9 into your answer to check if you had a very bad answer to get 4-6, on the last 6 correct answers. You don’t want to do that right in the first place, and you could improve by adding answers. But, for the rest of the class, you would probably want any number of answers that were most popular at that point. The trick is to use more maths-related questions, for example, mathematics puzzles that take some thought or knowledge into account. I’ll start by illustrating some of those math puzzles. Let’s say you have a 5-8 puzzle that gives you a 1 in the science game table. You know that the answer is 5, but you don’t want it to be, so the table may be inverted: Ligamenta che il giorno che teva questa cosa Here’s the math puzzle: Given the data block given: Your answer was 5 Your problem was trying to solve 10 Your problem was looking at the equations given later in the equation table Your car was found in the middle of a blue sky Your bag had a broken spoon Your calculator was guessing and multiplying the number on the calculator board The second question asks to find out if any points that have a value less than zero (sophisticated) are nearby? Three different levels of math, math-shifted and math-seventy turned out to be the most difficult, if not the most common, problem of each level we’ve considered. So, after removing the math to the last level, the way to solve the problem is through six points when you set the puzzle to five. This rule was one of those aspects of Maths that is almost non-trivial (although it should be fairly easy to find down a search you already do in my day). The other example is that I moved the order of the math questions on the table below to the right of the numbers you’ve pointed out, along with the first few rows in the order. Next, I need to find the solution to that calculation, which is 3 3/144 + 270 45 + 10862 + 1 That result doesn’t look good in my eyes, but it looks good in your mind. If I wanted to fill the room with calculations in a lower order, the problem should be simple. So, that’s it for why I was very choosy. Now I can solve a mathematical puzzle in three different places in what appears to be an easy way to learn Math and solve it! The hardest part of sorting a number is the number itself, as each 5-8 cube in that code has a 2-measure
{"url":"https://gedexamhire.com/how-can-i-improve-my-problem-solving-skills-for-the-ged-math-exam","timestamp":"2024-11-08T05:59:04Z","content_type":"text/html","content_length":"165095","record_id":"<urn:uuid:2917f68e-7ad7-43eb-9654-f7a779947ab7>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00720.warc.gz"}
Nonparametric Correlation Save Options • Genstat Knowledge Base 2024 Use this to save results from Spearman’s rank correlation or Kendall’s rank correlation coefficient. 1. After selecting the appropriate boxes, type names for the data structures into the corresponding In: fields. The tables below indicate the type of structures formed for each item. Spearman’s rank correlation Rank Correlation coefficients Scalar or symmetric matrix Saves the correlation coefficient for each pair of samples. Student’s t approximation Scalar or symmetric matrix Saves the Student’s t approximation to the correlation coefficient for each pair of samples. Degrees of freedom Scalar or symmetric matrix Saves the degrees of freedom for each t statistic. Ranks Pointer Saves the ranks of each sample. Kendall’s rank correlation coefficient Rank correlation coefficients Scalar or symmetric matrix Saves the correlation coefficient for each pair of samples. Probabilities Scalar or symmetric matrix Saves the probability to the correlation coefficient for each pair of samples. Normal approximation Scalar or symmetric matrix Saves the Normal approximation to the correlation coefficient for each pair of samples. Display in spreadsheet Select this to display the results in a new spreadsheet window. See also
{"url":"https://genstat.kb.vsni.co.uk/knowledge-base/npcorrel-save/","timestamp":"2024-11-06T09:07:43Z","content_type":"text/html","content_length":"40272","record_id":"<urn:uuid:d0880494-f282-42af-b254-775772b23c6f>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00353.warc.gz"}
Depth Fade Is it possible to control the depth fade if used as an alpha in a lerp via camera vector rather than its initial default. I have tried and failed or I am not implementing it correctly. Any advice would be muchly appreciated. My material itself looks fine from the ground stationary but moving certain ways makes the depth fade move to unwanted results. Ive tried using a scene depth and pixel depth in the material as well. Not sure if I am getting closer to the result i want. I think I know what you are talking about. I had this problem working on the imulsion material for gears 1. I was using depth fade as the texture coordinate of a gradient texture for where the pool intersected the ground. In order to make the gradient wider, I made the slope of geometry under the translucency fairly flat. Initially, the thickness of the band fluctuated way too much based on viewing angle. At first I considered it an artifact but it is just due to how the ray of intersection travels much further at glancing angles when the two planes in question are parallel. I refer to the correction below as “perspective corrected depth fade” as you will be counteracting that long ray at glancing angles. Basically you do a dot product of the camera vector and vertex normal, and either clamp or constantbiasscale that into the 0-1 range and use it to lerp between two different depths, each a scalar parameter. You will have to fiddle with the numbers a bit as this isn’t perfect.
{"url":"https://forums.unrealengine.com/t/depth-fade/288178","timestamp":"2024-11-08T18:22:48Z","content_type":"text/html","content_length":"21721","record_id":"<urn:uuid:eec520e7-fd60-4716-b12a-1e4a6a679825>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00327.warc.gz"}
The effect of alternative summary statistics for communicating risk reduction on decisions about taking statins: a randomized trial Background: While different ways of presenting treatment effects can affect health care decisions, little is known about which presentations best help people make decisions consistent with their own values. We compared six summary statistics for communicating coronary heart disease (CHD) risk reduction with statins: relative risk reduction and five absolute summary measures-absolute risk reduction, number needed to treat, event rates, tablets needed to take, and natural frequencies. Methods and Findings: We conducted a randomized trial to determine which presentation resulted in choices most consistent with participants' values. We recruited adult volunteers who participated through an interactive Web site. Participants rated the relative importance of outcomes using visual analogue scales (VAS). We then randomized participants to one of the six summary statistics and asked them to choose whether to take statins based on this information. We calculated a relative importance score (RIS) by subtracting the VAS scores for the downsides of taking statins from the VAS score for CHD. We used logistic regression to determine the association between participants' RIS and their choice. 2,978 participants completed the study. Relative risk reduction resulted in a 21% higher probability of choosing to take statins over all values of RIS compared to the absolute summary statistics. This corresponds to a number needed to treat (NNT) of 5; i.e., for every five participants shown the relative risk reduction one additional participant chose to take statins, compared to the other summary statistics. There were no significant differences among the absolute summary statistics in the association between RIS and participants' decisions whether to take statins. Natural frequencies were best understood (86% reported they understood them well or very well), and participants were most satisfied with this information. Conclusions: Presenting the benefits of taking statins as a relative risk reduction increases the likelihood of people accepting treatment compared to presenting absolute summary statistics, independent of the relative importance they attach to the consequences. Natural frequencies may be the most suitable summary statistic for presenting treatment effects, based on self-reported preference, understanding of and satisfaction with the information, and confidence in the decision. Dive into the research topics of 'The effect of alternative summary statistics for communicating risk reduction on decisions about taking statins: a randomized trial'. Together they form a unique
{"url":"https://discovery.dundee.ac.uk/en/publications/the-effect-of-alternative-summary-statistics-for-communicating-ri","timestamp":"2024-11-09T14:14:06Z","content_type":"text/html","content_length":"62339","record_id":"<urn:uuid:7552ed9d-81a9-4e6a-a37b-568938fffb14>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00433.warc.gz"}