Techniques of Classification and Clustering

Problem Description
• Assume
  - A1, A2, …, Ad: d (ordered or unordered) domains
  - S = A1 × A2 × … × Ad: a d-dimensional (numerical or non-numerical) space
• Input
  - V = {v1, v2, …, vm}: d-dimensional points, where vi = (vi1, vi2, …, vid)
  - The jth component of vi is drawn from domain Aj
• Output
  - G = {g1, g2, …, gk}: a set of groups of V with labels, where gi ⊆ V
• Supervised classification
  - Discriminant analysis, or simply classification
  - A collection of labeled (pre-classified) training patterns is provided
  - Aims to label newly encountered, as yet unlabeled, patterns
• Unsupervised classification
  - Clustering
  - Aims to group a given collection of unlabeled patterns into meaningful clusters
  - Category labels are data driven

Methods for Classification
• Neural nets
  - Classification functions are obtained by making multiple passes over the training set
  - Poor generation (training) efficiency
  - Does not handle non-numerical data efficiently
• Decision trees
  - If E contains only objects of one group, the decision tree is just a leaf labeled with that group
  - Construct a DT that correctly classifies objects in the training data set
  - Test: classify the unseen objects in the test data set

Decision Trees (Ex: Credit Analysis)
• Example splits: salary < 20000; education in {graduate}

Decision Trees
• Pros
  - Fast execution time
  - Generated rules are easy for humans to interpret
  - Scale well for large data sets
  - Can handle high-dimensional data
• Cons
  - Cannot capture correlations among attributes
  - Consider only axis-parallel cuts

Decision Tree Algorithms
• Classifiers from the machine learning community
  - ID3: J. R. Quinlan, Induction of decision trees, Machine Learning, 1, 1986.
  - C4.5: J. Ross Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann, 1993.
  - CART: L. Breiman, J. H. Friedman, R. A. Olshen, and C. J.
Stone, Classification and Regression Trees, Wadsworth, Belmont, 1984.
• Classifiers for large databases
  - SLIQ [MAR96]; SPRINT: John Shafer, Rakesh Agrawal, and Manish Mehta, SPRINT: A scalable parallel classifier for data mining, Proc. of VLDB Conf., Bombay, India, September 1996.
  - SONAR: Takeshi Fukuda, Yasuhiko Morimoto, and Shinichi Morishita, Constructing efficient decision trees by using optimized numeric association rules, Proc. of VLDB Conf., Bombay, India, 1996.
  - Rainforest: J. Gehrke, R. Ramakrishnan, V. Ganti, RainForest: A Framework for Fast Decision Tree Construction of Large Datasets, Proc. of VLDB Conf., 1998.
• Building phase followed by a pruning phase

Decision Tree Algorithms
• Building phase
  - Recursively split nodes using the best splitting attribute for the node
• Pruning phase
  - A smaller, imperfect decision tree generally achieves better accuracy
  - Prune leaf nodes recursively to prevent overfitting

Theoretic Background
• Entropy
• Similarity measures
• Advanced terms

Information Theory Concepts
• Entropy of a random variable X with probability distribution p(x): H(X) = -Σx p(x) log2 p(x)
• Kullback-Leibler (KL) divergence, or relative entropy, between two probability distributions p and q: KL(p ‖ q) = Σx p(x) log2 (p(x) / q(x))
• Mutual information between random variables X and Y: I(X; Y) = Σx Σy p(x, y) log2 (p(x, y) / (p(x) p(y)))

What is Entropy?
• S is a sample of the training data set
• Entropy measures the impurity of S
• H(X) is the entropy of X
• If H(X) = 0, X takes a single value; as H(X) increases, the values of X are more heterogeneous
• For the same number of X values:
  - Low entropy means X is from a varied (peaks-and-valleys) distribution: a histogram of the frequency distribution of values of X would have many lows and one or two highs, so values sampled from it would be more predictable
  - High entropy means X is from a uniform (boring) distribution: a histogram of the frequency distribution of values of X would be flat, so values sampled from it would be all over the place

Entropy-Based Data Segmentation
T. Fukuda, Y. Morimoto, S. Morishita, T.
Tokuyama, Constructing Efficient Decision Trees by Using Optimized Numeric Association Rules, Proc. of VLDB Conf., 1996.
• An attribute has three categories: 40% C1, 30% C2, 30% C3
• (chart: class distributions of C1, C2, C3 over candidate segmentations S1-S4)

Information Theoretic Measure
R. Agrawal, S. Ghosh, T. Imielinski, B. Iyer, A. Swami, An Interval Classifier for Database Mining Applications, Proc. of VLDB Conf., 1992.
• Information gain by branching on Ai: gain(Ai) = E - Ei
  - E: the entropy of an object set containing objects ek of group Gk
  - Ei: the expected entropy for the tree with Ai as the root, where Eij is the expected entropy for the subtree of an object set
  - Information content of the value of Ai
• (chart: example segmentations S1-S5, with gain1 = E - E1 = 0.015 and gain2 = E - E2 = 1.09)

Distributional Similarity Measures
• Cosine
• Jaccard coefficient
• Dice coefficient
• Overlap coefficient
• L1 distance (city-block distance)
• Euclidean distance (L2 distance)
• Hellinger distance
• Information radius (Jensen-Shannon divergence)
• Skew divergence
• Confusion probability
• Lin's similarity measure

Similarity Measures
• Minkowski distance: dp(x, y) = (Σi |xi - yi|^p)^(1/p)
  - Euclidean distance: p = 2
  - Manhattan distance: p = 1
• Mahalanobis distance
  - Normalization due to weighting schemes
  - Σ is the sample covariance matrix of the patterns, or the known covariance matrix of the pattern-generation process

General Form
• I(common(A, B)): information content associated with the statement describing what A and B have in common
• I(description(A, B)): information content associated with the statement describing A and B
• P(s): probability of the statement within the world of the objects in question, i.e., the fraction of objects exhibiting feature s.
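A few of the measures listed above can be written out directly. This is an illustrative sketch (the function names are my own, not from any particular library):

```python
from math import sqrt

def cosine_sim(x, y):
    """Cosine of the angle between two real-valued vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (sqrt(sum(a * a for a in x)) * sqrt(sum(b * b for b in y)))

def jaccard(a, b):
    """Jaccard coefficient of two sets: |A & B| / |A | B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def dice(a, b):
    """Dice coefficient of two sets: 2|A & B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

def minkowski(x, y, p):
    """Minkowski distance; p = 1 is Manhattan, p = 2 is Euclidean."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)
```

For example, minkowski([0, 0], [3, 4], 2) gives the Euclidean distance 5.0, while p = 1 gives the city-block distance 7.0.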
IT-Sim(A, B): the information-theoretic similarity, defined from the two information contents above

Similarity Measures
• The Set/Bag Model: let X and Y be two collections of XML documents
  - Jaccard's coefficient: |X ∩ Y| / |X ∪ Y|
  - Dice's coefficient: 2|X ∩ Y| / (|X| + |Y|)
• Cosine-Similarity Measure (CSM), from the Vector-Space Model: cos(X, Y) = X·Y / (‖X‖ ‖Y‖)

Cosine-Similarity Measure: Query Processing (a single cosine)
• For every term i and each document j, store the term frequency tfij
• There are tradeoffs in whether to store the term count, the term weight, or the weight scaled by idfi
• At query time, accumulate the component-wise sum
• If you are indexing 5 billion documents (web search), an array of accumulators is infeasible

Similarity Measures (2)
• The Generalized Cosine-Similarity Measure (GCSM): let X and Y be vectors over a hierarchical model
• Why only for depth 2?

Dim Similarities
• Cosine measure
• Hellinger measure
• Tanimoto measure
• Clarity measure

Advanced Terms
• Conditional entropy
• Information gain

Specific Conditional Entropy
• H(Y | X = v)
• Suppose I am trying to predict output Y from input X
• X = College Major, Y = Likes "Gladiator"
• Assume this data reflects the true probabilities:

  X        Y
  Math     Yes
  History  No
  CS       Yes
  Math     No
  Math     No
  CS       Yes
  History  No
  Math     Yes

• From this data we estimate
  - P(LikeG = Yes) = 0.5
  - P(Major = Math and LikeG = No) = 0.25
  - P(Major = Math) = 0.5
  - P(LikeG = Yes | Major = History) = 0
• Note
  - H(X) = 1.5, H(Y) = 1
  - H(Y | X = Math) = 1, H(Y | X = History) = 0, H(Y | X = CS) = 0

Conditional Entropy
• Definition: H(Y | X) = Σj P(X = vj) H(Y | X = vj), the average specific conditional entropy of Y
• If you choose a record at random, what will be the conditional entropy of Y, conditioned on that row's value of X?
• The expected number of bits needed to transmit Y if both sides know the value of X

  vj       P(X = vj)  H(Y | X = vj)
  Math     0.5        1
  History  0.25       0
  CS       0.25       0

Information Gain
• Definition: IG(Y | X) = I must transmit Y; how many bits on average would it save me if both ends of the line knew X?
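The numbers worked out above (H(X) = 1.5, H(Y | X = Math) = 1, H(Y | X) = 0.5) can be reproduced with a short sketch; the function names here are illustrative, not from any library:

```python
from collections import Counter, defaultdict
from math import log2

def entropy(values):
    """Shannon entropy in bits of an observed sample."""
    n = len(values)
    return -sum(c / n * log2(c / n) for c in Counter(values).values())

def conditional_entropy(xs, ys):
    """H(Y|X) = sum over v of P(X = v) * H(Y | X = v)."""
    groups = defaultdict(list)
    for x, y in zip(xs, ys):
        groups[x].append(y)
    n = len(xs)
    return sum(len(g) / n * entropy(g) for g in groups.values())

def info_gain(xs, ys):
    """IG(Y|X) = H(Y) - H(Y|X)."""
    return entropy(ys) - conditional_entropy(xs, ys)

# The College Major / Likes "Gladiator" sample from the text
major = ["Math", "History", "CS", "Math", "Math", "CS", "History", "Math"]
likes = ["Yes", "No", "Yes", "No", "No", "Yes", "No", "Yes"]
```

Here entropy(major) returns 1.5, entropy(likes) returns 1.0, and conditional_entropy(major, likes) returns 0.5, matching the table.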
• IG(Y | X) = H(Y) - H(Y | X)
• For the data above: H(Y) = 1, H(Y | X) = 0.5·1 + 0.25·0 + 0.25·0 = 0.5; thus IG(Y | X) = 1 - 0.5 = 0.5

Relative Information Gain
• Definition: RIG(Y | X) = I must transmit Y; what fraction of the bits would it save me, on average, if both ends of the line knew X?
• RIG(Y | X) = (H(Y) - H(Y | X)) / H(Y)
• For the same data: RIG(Y | X) = (1 - 0.5) / 1 = 0.5

What is Information Gain Used For?
• Suppose you are trying to predict whether someone will live past 80 years. From historical data you might find
  - IG(LongLife | HairColor) = 0.01
  - IG(LongLife | Smoker) = 0.2
  - IG(LongLife | Gender) = 0.25
  - IG(LongLife | LastDigitOfSSN) = 0.00001
• IG tells you how interesting a 2-d contingency table is going to be

Clustering
• Given
  - Data points and the number of desired clusters K
• Group the data points into K clusters
  - Data points within a cluster are more similar to each other than to points across clusters
• Sample applications
  - Customer segmentation
  - Market-basket customer analysis
  - Attached mailing in direct marketing
  - Clustering companies with similar growth

A Clustering Example
• Four clusters, e.g.: (Income = High, Children = 1, Car = Luxury), (Income = Medium, Children = 2, Car = Truck), (Income = Medium, Children = 3, Car = Sedan), (Income = Low, Children = 0, Car = Compact)
• There are different ways of representing clusters

Clustering Methods
• Partitioning
  - Given a set of objects and a clustering criterion, partitional clustering obtains a partition of the objects into clusters such that the objects in a cluster are more similar to each other than to objects in different clusters
  - K-means and K-medoid methods determine K cluster representatives and assign each object to the cluster with the representative closest to the object, such that the sum of the squared distances between the objects and their representatives is minimized
• Hierarchical
  - Nested sequence of partitions.
  - Agglomerative: starts by placing each object in its own cluster, then merges these atomic clusters into larger and larger clusters until all objects are in a single cluster
  - Divisive: starts with all objects in one cluster and subdivides it into smaller pieces
• Methods covered: k-means, fuzzy C-means clustering, hierarchical clustering, probabilistic clustering

Similarity Measures (2)
• Mutual Neighbor Distance (MND): MND(xi, xj) = NN(xi, xj) + NN(xj, xi), where NN(xi, xj) is the neighbor number of xj with respect to xi
• Distance under context: s(xi, xj) = f(xi, xj, e), where e is the context

K-Means Clustering Algorithm
1. Choose k cluster centers to coincide with k randomly chosen patterns
2. Assign each pattern to its closest cluster
3. Recompute the cluster centers using the current cluster memberships
4. If a convergence criterion is not met, go to step 2
• Typical convergence criteria: no (or minimal) reassignment of patterns to new cluster centers, or a minimal decrease in squared error

Objective Function
• The k-means algorithm aims at minimizing the square-error objective function J = Σj Σ_{x in Cj} ‖x - cj‖², where cj is the center of cluster Cj

K-Means Algorithm (Ex)
• Given a clustering ψ, denote by ψ(x) the centroid this clustering associates with an arbitrary point x. A measure of quality for ψ:
  - Distortion(ψ) = Σx d²(x, ψ(x)) / R, where R is the total number of points and x ranges over all input points
• Improvement: penalize model size, e.g., Distortion + (#parameters)·log R, with #parameters = m·k
• The way the means are initialized is a problem; one popular way is to randomly choose k of the samples
• The results produced depend on the initial values for the means
• It can happen that the set of samples closest to mi is empty, so mi cannot be updated
• The results depend on the metric used to measure distance

Related Work: Clustering
• Graph-based clustering
  - For an XML document collection C, the s-Graph sg(C) = (N, E) is a directed graph such that N is the set of all the elements and attributes in the documents in C, and (a, b) ∈
E if and only if a is a parent element of b in document(s) in C (b can be an element or an attribute)
  - For two sets C1 and C2 of XML documents, the distance between them is defined in terms of |sg(Ci)|, the number of edges in each s-graph

Fuzzy C-Means Clustering
• FCM is a method of clustering that allows one piece of data to belong to two or more clusters
• Fuzzy partitioning is carried out through an iterative optimization of the objective function Jm, with updates of the memberships uij and the cluster centers cj
• The iteration stops when the largest membership change between steps falls below ε, where ε is a termination criterion between 0 and 1 and k is the iteration step; this procedure converges to a local minimum or a saddle point of Jm

Fuzzy Clustering
• Properties
  - uij ∈ [0, 1] for all i, j
  - Σj uij = 1 for all i
  - 0 < Σi uij < N for all j
• Correlation between m and ε: more iterations k for a smaller ε

Hierarchical Clustering
• Basic process
1. Start by assigning each item to a cluster: N clusters for N items (let the distances between the clusters equal the distances between the items they contain)
2. Find the closest (most similar) pair of clusters and merge them into a single cluster
3. Compute distances between the new cluster and each of the old clusters
4. Repeat steps 2 and 3 until all items are clustered into a single cluster of size N

Hierarchical Clustering (Ex)

Hierarchical Clustering Algorithms
• Single-linkage clustering
  - The distance between two clusters is the minimum of the distances between all pairs of patterns drawn from the two clusters (one pattern from the first cluster, the other from the second)
• Complete-linkage clustering
  - The distance between two clusters is the maximum of the distances between all pairs of patterns drawn from the two clusters
• Average-linkage clustering
• Minimum-variance algorithm

Single-/Complete-Link Clustering

Single-Linkage Hierarchical Clustering
• Steps
1. Begin with the disjoint clustering having level L(0) = 0 and sequence number m = 0.
2. Find the least dissimilar pair of clusters in the current clustering: d[(r), (s)] = min d[(i), (j)], where the minimum is over all pairs of clusters in the current clustering.
3. Increment the sequence number: m = m + 1. Merge clusters (r) and (s) into a single cluster to form the next clustering m. Set L(m) = d[(r), (s)].
4. Update the proximity matrix D by deleting the rows and columns corresponding to clusters (r) and (s) and adding a row and column corresponding to the newly formed cluster. The proximity between the new cluster, denoted (r, s), and an old cluster (k) is defined as d[(k), (r, s)] = min(d[(k), (r)], d[(k), (s)]).
5. If all objects are in one cluster, stop. Else go to step 2.

Ex: Single-Linkage Agglomerative Hierarchical Clustering

ALGORITHM Agglomerative Hierarchical Clustering
INPUT: bit-vectors B in bitmap index BI
OUTPUT: a tree T
METHOD:
(1) Place each bit-vector Bi in its own cluster (singleton), creating the list of clusters L (initially, the leaves of T): L = B1, B2, …, Bn.
(2) Compute a merging cost function between every pair of elements in L to find the two closest clusters Bi, Bj, which will be the cheapest pair to merge.
(3) Remove Bi and Bj from L.
(4) Merge Bi and Bj to create a new internal node Bij in T, which will be the parent of Bi and Bj in the result tree.
(5) Repeat from (2) until only one set remains.

Graph-Theoretic Clustering
• Construct the minimal spanning tree (MST)
• Delete the MST edges with the largest lengths

Improving k-Means
D. Pelleg and A. Moore, Accelerating Exact k-means Algorithms with Geometric Reasoning, Proc. of ACM Conf. on Knowledge Discovery and Data Mining, 1999.
• Definitions
  - Center of clusters vs. (Th. 2) center of rectangle
  - c1 dominates c2 w.r.t. h if h is on the same side as c1 w.r.t. c2 (pp. 7, 9)
• Update centroid
  - If for every other center c', c dominates c' w.r.t. h (so c = owner(h), p. 10), insert into owner(h) or split h
  - (blacklist version) c1 dominates c2 w.r.t. any h' contained in h (p. 11)

Clustering Categorical Data: ROCK
S. Guha, R. Rastogi, K.
Shim, ROCK: Robust Clustering using linKs, Proc. of IEEE Conf. on Data Engineering, 1999.
• Use links to measure similarity/proximity
  - Not distance based
• Computational complexity
• Basic ideas
  - Similarity function and neighbors: let T1 = {1,2,3}, T2 = {3,4,5}; using the Jaccard coefficient, sim(T1, T2) = |T1 ∩ T2| / |T1 ∪ T2| = 1/5
  - According to the Jaccard coefficient, the distance between {1,2,3} and {1,2,6} is the same as the distance between {1,2,3} and {1,2,4}, although the former pair comes from two different clusters:
    Cluster 1 (from the item set {1,2,3,4,5}): {1,2,3} {1,4,5} {1,2,4} {2,3,4} {1,2,5} {2,3,5} {1,3,4} {2,4,5} {1,3,5}
    Cluster 2 (from the item set {1,2,6,7}): {1,2,6} {1,2,7} {1,6,7} {2,6,7}
  - In inducing LINKs, the main problem is that local properties involving only the two points are used
  - Neighbor: if two points are similar enough to each other, they are neighbors
  - Link: the link count for a pair of points is the number of their common neighbors

ROCK Algorithm
• Links: the number of common neighbors of the two points
• Algorithm
  - Draw a random sample
  - Cluster with links
  - Label the data on disk
• Criterion function: maximize the links within the k clusters, where Ci denotes cluster i of size ni.
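The neighbor and link definitions can be sketched directly. This is an illustrative sketch using a Jaccard threshold theta, not the paper's actual implementation:

```python
def jaccard(a, b):
    return len(a & b) / len(a | b)

def links(points, theta):
    """ROCK-style links: two points are neighbors when their Jaccard
    similarity is at least theta; link(p, q) counts common neighbors."""
    n = len(points)
    nbrs = [{j for j in range(n)
             if j != i and jaccard(points[i], points[j]) >= theta}
            for i in range(n)]
    return {(i, j): len(nbrs[i] & nbrs[j])
            for i in range(n) for j in range(i + 1, n)}
```

For instance, among the transactions {1,2,3}, {1,2,4}, {1,2,5}, {1,2,6} with theta = 0.5, every pair is a neighbor pair, so any two points share the remaining two points as common neighbors and have link count 2.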
For the similarity threshold 0.5:
  link({1,2,6}, {1,2,7}) = 4
  link({1,2,6}, {1,2,3}) = 3
  link({1,6,7}, {1,2,3}) = 2
  link({1,2,3}, {1,4,5})
computed over the points {1,2,3} {1,4,5} {1,2,4} {2,3,4} {1,2,5} {2,3,5} {1,3,4} {2,4,5} {1,3,5} {3,4,5} {1,2,6} {1,2,7} {1,6,7} {2,6,7}

More on Hierarchical Clustering Methods
• Major weaknesses of agglomerative clustering
  - Does not scale well: time complexity of at least O(n²), where n is the number of total objects
  - Can never undo what was done previously
• Integration of hierarchical and distance-based clustering
  - BIRCH (1996) uses a CF-tree and incrementally adjusts the quality of sub-clusters
  - CURE (1998) selects well-scattered points from the cluster and then shrinks them towards the center of the cluster by a specified fraction

BIRCH
Zhang, Ramakrishnan, Livny, Birch: Balanced Iterative Reducing and Clustering using Hierarchies, Proc. of ACM SIGMOD Conf., 1996.
• Pre-cluster data points using a CF-tree
• For each point
  - The CF-tree is traversed to find the closest cluster
  - If the threshold criterion is satisfied, the point is absorbed into the cluster
  - Otherwise, it forms a new cluster
• Requires only a single scan of the data
• Cluster summaries stored in the CF-tree are given to a main-memory hierarchical clustering algorithm

Initialization of BIRCH
• The CF of a cluster of n d-dimensional vectors V1, …, Vn is defined as (n, LS, SS)
  - n: the number of vectors
  - LS: the sum of the vectors
  - SS: the sum of squares of the vectors
• Additivity: CF1 + CF2 = (n1 + n2, LS1 + LS2, SS1 + SS2)
  - This property is used for incrementally maintaining cluster features
• The distance between two clusters CF1 and CF2 is defined to be the distance between their centroids

Clustering Feature Vector
• Clustering Feature: CF = (N, LS, SS)
  - N: number of data points
  - LS: linear sum of the N data points, Σ_{i=1}^{N} Xi
  - SS: square sum of the N data points
• Example: for the points (3,4), (2,6), (4,5), (4,7), (3,8), CF = (5, (16,30), (54,190))
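The clustering feature and its additivity property can be checked with a short sketch (illustrative names, not BIRCH's actual code):

```python
def cf(points):
    """Clustering Feature (N, LS, SS) of a set of d-dimensional points."""
    d = len(points[0])
    return (len(points),
            tuple(sum(p[i] for p in points) for i in range(d)),
            tuple(sum(p[i] ** 2 for p in points) for i in range(d)))

def cf_merge(a, b):
    """CF additivity: CF1 + CF2 = (N1 + N2, LS1 + LS2, SS1 + SS2)."""
    return (a[0] + b[0],
            tuple(x + y for x, y in zip(a[1], b[1])),
            tuple(x + y for x, y in zip(a[2], b[2])))
```

On the five example points, cf returns (5, (16, 30), (54, 190)), and merging the CFs of any split of the points gives the same result, which is exactly what lets BIRCH maintain cluster features incrementally.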
Notations
• Given N d-dimensional data points in a cluster
  - Centroid X0, radius R, diameter D, centroid Euclidean distance D0, centroid Manhattan distance D1

Notations (2)
• Given N d-dimensional data points in a cluster
  - Average inter-cluster distance D2, average intra-cluster distance D3, variance increase distance D4

CF Tree
• Branching factor B = 7, leaf capacity L = 6; non-leaf nodes hold the CFs of their children, leaf nodes hold the CFs of sub-clusters
• Example (1-d values, threshold T given, B = 3), for inputs 3, 6, 8, and 1:
  - (2, (9), (45)) is split into (2, (4), (10)) and (2, (14), (100))
  - After 2 is inserted as (1, (2), (4)): (3, (6), (14)) with children (2, (3), (5)) and (1, (3), (9)); and (2, (14), (100))
  - After 5 is inserted as (1, (5), (25)): (3, (6), (14)) with children (2, (3), (5)) and (1, (3), (9)); and (2, (11), (61)), (1, (8), (64))
  - After 7 is inserted as (1, (7), (49)): similarly

Evaluation of BIRCH
• Scales linearly: finds a good clustering with a single scan and improves the quality with a few additional scans
• Weaknesses: handles only numeric data, and is sensitive to the order of the data records

Data Summarization
• To compress the data into suitable representatives
• OPTICS, Data Bubbles: finding clusters from hierarchical clustering depending on the resolution

OPTICS
M. Ankerst, M. Breunig, H. Kriegel, J. Sander, OPTICS: Ordering Points to Identify the Clustering Structure, Proc. of ACM SIGMOD Conf., 1999.
• Preliminary: Nε(q) is the subset of D contained in the ε-neighborhood of q (ε is a radius)
• Definition 1 (directly density-reachable): object p is directly density-reachable from object q w.r.t. ε and MinPts in a set of objects D if
  1) p ∈ Nε(q)
  2) Card(Nε(q)) ≥ MinPts (Card(N) denotes the cardinality of the set N)
• Definitions
  - Directly density-reachable (p. 51, Figure 2) →
density-reachable (by transitivity of direct density-reachability)
  - Density-connected (p → o ← q)
  - Core-distance_{ε,MinPts}(p) = MinPts-distance(p)
  - Reachability-distance_{ε,MinPts}(p, o) = max(core-distance(o), dist(o, p)) (Figure 4)
  - Ex) cluster ordering and reachability values (Fig. 12)

Data Bubbles
M. Breunig, H. Kriegel, P. Kroger, J. Sander, Data Bubbles: Quality Preserving Performance Boosting for Hierarchical Clustering, Proc. of ACM SIGMOD Conf., 2001.
• ε-neighborhood of P
• k-distance of P: the distance such that for at least k objects O ∈ D, d(P, O) ≤ k-distance(P), and for at most k-1 objects O ∈ D, d(P, O) < k-distance(P)
• k-nearest neighbors of P
• MinPts-dist(P): a distance within which there are at least MinPts objects in the neighborhood of P

Data Bubbles (2)
• Structural distortion (Figure 11)
• Data Bubble B = (n, rep, extent, nnDist)
  - n: the number of objects in X; rep: a representative object for X; extent: an estimate of the radius of X; nnDist: a partial function estimating k-nearest-neighbor distances in X
• Distance(B, C) (page 6): dist(B.rep, C.rep) - B.extent - C.extent + B.nnDist(1) + C.nnDist(1) when the bubbles do not overlap, and max(B.nnDist(1), C.nnDist(1)) otherwise

K-Means in SQL
C. Ordonez, Integrating K-Means Clustering with a Relational DBMS Using SQL, IEEE TKDE, 2006.
• Dataset Y = {y1, y2, …, yn}, a d×n matrix, where each yi is a d×1 column vector
• K-Means finds k clusters by minimizing the square error from the centers.
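Before looking at the SQL formulation, the basic k-means loop itself (random seeding, assignment, centroid recomputation, convergence test) can be sketched in a few lines; this is an illustrative sketch, not the paper's implementation:

```python
import random

def squared_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, max_iters=100, seed=0):
    """Basic k-means: seed centers with k random points, then alternate
    assignment and centroid recomputation until assignments stabilize."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    assign = None
    for _ in range(max_iters):
        new = [min(range(k), key=lambda j: squared_dist(p, centers[j]))
               for p in points]
        if new == assign:              # convergence: no reassignment
            break
        assign = new
        for j in range(k):
            members = [p for p, a in zip(points, assign) if a == j]
            if members:                # an empty cluster keeps its old center
                centers[j] = tuple(sum(xs) / len(members)
                                   for xs in zip(*members))
    return centers, assign
```

Note the `if members:` guard: as remarked earlier, the set of samples closest to a center can be empty, in which case that center cannot be updated.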
• Square distance, Eq. (1), and objective function, Eq. (2)
• Matrices
  - W: k weights (fractions of n), a d×k matrix
  - C: k means (centroids), a d×k matrix
  - R: k variances (square distances), a k×1 matrix
  - Mj: the d sums of point-dimension values in cluster j, a d×k matrix
  - Qj: the d sums of squared dimension values in cluster j, a d×k matrix
  - Nj: the number of points in cluster j, a k×1 matrix
  - Intermediate tables YH, YV, YD, YNN, NMQ, WCR
• Representative statements (cleaned up from the paper's figures):
  INSERT INTO C SELECT 1, 1, Y1 FROM CH WHERE j = 1;
  INSERT INTO YD SELECT i, SUM((YV.val - C.C1)^2) AS d1, …, SUM((YV.val - C.Ck)^2) AS dk FROM YV, C WHERE YV.l = C.l GROUP BY i;
  INSERT INTO YNN SELECT i, CASE WHEN d1 < d2 AND … AND d1 < dk THEN 1 WHEN d2 < d3 … THEN 2 ELSE k END FROM YD;
  INSERT INTO NMQ SELECT l, j, SUM(1.0) AS N, SUM(YV.val) AS M, SUM(YV.val * YV.val) AS Q FROM YV, YNN WHERE YV.i = YNN.i GROUP BY l, j;

Incremental Data Summarization
S. Nassar, J. Sander, C. Cheng, Incremental and Effective Data Summarization for Dynamic Hierarchical Clustering, ACM SIGMOD, 2004.
• For D = {Xi}, 1 ≤ i ≤ N, and a data bubble of n objects, the data index ρ = n/N
• For the data indices with mean ρμ and standard deviation ρσ, a bubble is
  - good iff ρ ∈ [ρμ - ρσ, ρμ + ρσ]
  - under-filled iff ρ < ρμ - ρσ
  - over-filled iff ρ > ρμ + ρσ

Research Issues
• Dimensionality reduction
• Approximation

CURE: The Algorithm
Guha, Rastogi, Shim, CURE: An Efficient Clustering Algorithm for Large Databases, Proc. of ACM SIGMOD Conf., 1998.
• Draw a random sample s
• Partition the sample into p partitions of size s/p
• Partially cluster each partition into s/(pq) clusters
• Eliminate outliers
  - By random sampling
  - If a cluster grows too slowly, eliminate it
• Cluster the partial clusters.
• Label the data on disk

Data Partitioning and Clustering

CURE: Shrinking Representative Points
• Shrink the multiple representative points towards the gravity center by a fraction α
• Multiple representatives capture the shape of the cluster

Density-Based Clustering Methods
• Clustering based on density (a local cluster criterion), such as density-connected points
• Major features
  - Discover clusters of arbitrary shape
  - Handle noise
  - One scan
  - Need density parameters as a termination condition
• Several interesting studies
  - DBSCAN: Ester, et al. (KDD '96)
  - OPTICS: Ankerst, et al. (SIGMOD '99)
  - DENCLUE: Hinneburg and Keim (KDD '98)
  - CLIQUE: Agrawal, et al. (SIGMOD '98)

CLIQUE (Clustering In QUEst)
Agrawal, Gehrke, Gunopulos, Raghavan, Automatic Subspace Clustering of High Dimensional Data for Data Mining Applications, Proc. of ACM SIGMOD Conf., 1998.
• Automatically identifies subspaces of a high-dimensional data space that allow better clustering than the original space
• CLIQUE can be considered both density-based and grid-based
  - It partitions each dimension into the same number of equal-length intervals
  - It partitions a d-dimensional data space into non-overlapping rectangular units
  - A unit is dense if the fraction of total data points contained in the unit exceeds an input model parameter
  - A cluster is a maximal set of connected dense units within a subspace
• (chart: salary (×10,000) vs. age, with dense units highlighted)

CLIQUE: The Major Steps
• Partition the data space and find the number of points that lie inside each cell of the partition
• Identify the subspaces that contain clusters, using the Apriori principle
• Identify clusters
  - Determine dense units in all subspaces of interest
  - Determine connected dense units in all subspaces of interest
• Generate a minimal description for the clusters
  - Determine the maximal regions that cover each cluster of connected dense units
  - Determine a minimal cover for each cluster

Strengths and Weaknesses of CLIQUE
• Strengths
  - It automatically finds subspaces of the highest dimensionality such that high-density clusters exist in those subspaces
  - It is insensitive to the order of records in the input and does not presume some canonical data distribution
  - It scales linearly with the size of the input and has good scalability as the number of dimensions in the data increases
• Weakness
  - The accuracy of the clustering result may be degraded at the expense of the simplicity of the method

Model-Based Clustering
• Assume the data is generated from K probability distributions
  - Typically Gaussian distributions
• A soft, or probabilistic, version of K-means clustering
• Need to find the distribution parameters.
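A minimal 1-D sketch of this model-based approach, assuming two unit-variance Gaussian components with equal priors (only the means are re-estimated; everything else is simplified for illustration):

```python
from math import exp, pi, sqrt

def em_two_gaussians(xs, mu0, mu1, iters=50):
    """EM for a 1-D mixture of two unit-variance Gaussians with equal
    priors: the E-step computes responsibilities, the M-step
    re-estimates the means as responsibility-weighted averages."""
    def pdf(x, m):
        return exp(-0.5 * (x - m) ** 2) / sqrt(2 * pi)
    for _ in range(iters):
        # E-step: responsibility of component 0 for each point
        r = [pdf(x, mu0) / (pdf(x, mu0) + pdf(x, mu1)) for x in xs]
        # M-step: weighted mean update for each component
        mu0 = sum(ri * x for ri, x in zip(r, xs)) / sum(r)
        mu1 = sum((1 - ri) * x for ri, x in zip(r, xs)) / sum(1 - ri for ri in r)
    return mu0, mu1
```

On data drawn near 0 and near 5, the two estimated means converge to roughly those two values, illustrating the "soft k-means" character of the method.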
• The EM algorithm is the standard tool for this

EM Algorithm
• Initialize K cluster centers
• Iterate between two steps
  - Expectation step: assign points to clusters
  - Maximization step: estimate model parameters

CURE (Clustering Using REpresentatives)
Guha, Rastogi, Shim, CURE: An Efficient Clustering Algorithm for Large Databases, Proc. of ACM SIGMOD Conf., 1998.
• Stops the creation of a cluster hierarchy when a level consists of k clusters
• Uses multiple representative points to evaluate the distance between clusters; adjusts well to arbitrarily shaped clusters and avoids the single-link effect

Drawbacks of Distance-Based Methods
• Drawbacks of square-error-based clustering methods
  - Consider only one point as the representative of a cluster
  - Good only for convex-shaped clusters of similar size and density, and only if k can be reasonably estimated
• BIRCH (Zhang, Ramakrishnan, Livny, ACM SIGMOD 1996)
  - Dependent on the order of insertions
  - Works for convex, isotropic clusters of uniform size
• Labeling problem
  - Centroid approach: even with correct centers, we cannot always label correctly

Jensen-Shannon Divergence
• Jensen-Shannon (JS) divergence between two probability distributions: JS(p, q) = ½ KL(p ‖ m) + ½ KL(q ‖ m), where m = ½(p + q)
• The JS divergence generalizes to a finite number of probability distributions

Information-Theoretic Clustering (preserving mutual information)
• (Lemma) The loss in mutual information equals a weighted sum, over clusters, of the Jensen-Shannon divergences between the individual distributions in each cluster
• Interpretation: the quality of each cluster is measured by the JS divergence between the individual distributions in the cluster
• Goal: find a clustering that minimizes this loss

Information-Theoretic Co-Clustering (preserving mutual information)
• (Lemma) The loss in mutual information equals the KL divergence between p(x, y) and an approximation q(x, y) determined by the row and column clusterings
• It can be shown that q(x, y) is a maximum-entropy approximation to p(x, y).
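As an aside, the Jensen-Shannon divergence used above as the cluster-quality measure can be sketched as (illustrative names, log base 2, so values are in bits):

```python
from math import log2

def kl(p, q):
    """KL divergence D(p || q) in bits, with the convention 0*log 0 = 0."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    """Jensen-Shannon divergence: mean KL to the average distribution."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Unlike KL, js is symmetric, always finite, and bounded by 1 bit: identical distributions give 0, disjoint ones give 1.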
• q(x, y) preserves the marginals: q(x) = p(x) and q(y) = p(y); the cluster statistics are the parameters that determine q

Preserving Mutual Information
• Lemma: the cluster distribution may be thought of as the prototype of a row cluster (it is not, in general, the usual centroid of the cluster)
• Similarly for column clusters

Example (continued)

Co-Clustering Algorithm

Properties of the Co-Clustering Algorithm
• Theorem: the co-clustering algorithm monotonically decreases the loss in mutual information (the objective function value)
• The marginals p(x) and p(y) are preserved at every step (q(x) = p(x) and q(y) = p(y))
• Can be generalized to higher dimensions

Applications: Text Classification
• Assigning class labels to text documents
• Training and testing phases: a classifier learns from the training data (a document collection grouped into classes) and then assigns a class to each new document

Dimensionality Reduction
• Feature selection
  - Select the best words and throw away the rest
  - Frequency-based pruning
  - Information-criterion-based pruning
• Feature clustering
  - Do not throw away words; cluster words instead and use the clusters as features
• Both start from the bag-of-words vector of a document
• Data sets
  - 20 Newsgroups data: 20 classes, 20,000 documents
  - Classic3 data set: 3 classes (cisi, med, and cran), 3,893 documents
  - Dmoz Science HTML data: 49 leaves in the hierarchy, 5,000 documents with 14,538 words
  - Available at http://www.cs.utexas.edu/users/manyam
• Implementation details
  - Bow for indexing, co-clustering, clustering, and Naive Bayes with word clusters
• Naive Bayes classifier
  - Assign document d to the class with the highest posterior probability
  - Relation to KL divergence
  - Using word clusters instead of words, where the parameters for clusters are estimated according to joint statistics

Selecting Correlated Attributes
T. Fukuda, Y. Morimoto, S. Morishita, T. Tokuyama, Constructing Efficient Decision Trees by Using Optimized Numeric Association Rules, Proc. of VLDB Conf., 1996.
• A and A' are decided to be strongly correlated iff their measure exceeds a threshold θ ≥
1.

MDL-Based Decision Tree Pruning
M. Mehta, J. Rissanen, R. Agrawal, MDL-based Decision Tree Pruning, Proc. of KDD Conf., 1995.
• Two steps for the induction of decision trees
  - Construct a DT using the training data
  - Reduce the DT by pruning, to prevent overfitting
• Possible approaches
  - Cost-complexity pruning, using a separate set of samples for pruning
  - DT pruning using the same training data set for pruning
  - MDL-based pruning, using the Minimum Description Length (MDL) principle

Pruning Using the MDL Principle
• View the decision tree as a means of efficiently encoding the classes of the records in the training set
• MDL principle: the best tree is the one that can encode the records using the fewest bits
• The cost of encoding the tree includes
  - 1 bit for encoding the type of each node (e.g., leaf or internal)
  - Csplit: the cost of encoding the attribute and value for each split
  - n·E: the cost of encoding the n records in each leaf (E is the entropy)
• Problem: compute the minimum-cost subtree at the root of the built tree
  - Let minCN be the cost of encoding the minimum-cost subtree rooted at N
  - Prune the children of a node N if minCN = n·E + 1
  - Compute minCN as follows
    - N is a leaf: n·E + 1
    - N has children N1 and N2: min{n·E + 1, Csplit + 1 + minCN1 + minCN2}
  - Prune the tree in a bottom-up fashion

MDL Pruning: Example
R. Rastogi, K. Shim, PUBLIC: A Decision Tree Classifier that Integrates Building and Pruning, Proc. of VLDB Conf., 1998.
• Cost of encoding the records in N: n·E + 1 = 3.8
• Csplit = 2.6
• minCN = min{3.8, 2.6 + 1 + 1 + 1} = 3.8
• Since minCN equals n·E + 1, N1 and N2 are pruned

PUBLIC
R. Rastogi, K. Shim, PUBLIC: A Decision Tree Classifier that Integrates Building and Pruning, Proc. of VLDB Conf., 1998.
• Prune the tree during (not after) the building phase • Execute the pruning algorithm (periodically) on the partial tree • Problem: how to compute minC_N for a yet-to-be-expanded leaf N in a partial tree • Solution: compute a lower bound on the subtree cost at N and use this as minC_N when pruning • minC_N is thus a lower bound on the cost of the subtree rooted at N • Prune the children of a node N if minC_N = n·E + 1 • Guaranteed to generate a tree identical to that generated by SPRINT R. Rastogi, K. Shim, PUBLIC: A Decision Tree Classifier that Integrates Building and Pruning, Proc. of VLDB Conf., 1998.

sal   education    Label
10K   High-school  Reject
40K   Under        Accept
15K   Under        Reject
75K   grad         Accept
18K   grad         Accept

• Simple lower bound for a subtree: 1 • Cost of encoding the records in N: n·E + 1 = 5.8 • Csplit = 4 • minC_N = min{5.8, 4 + 1 + 1 + 1} = 5.8 • Since minC_N = n·E + 1, N1 and N2 are pruned • Theorem: The cost of any subtree with s splits and rooted at node N is at least 2s + 1 + s·log a + Σ_{i=s+2}^{k} n_i, where • a is the number of attributes • k is the number of classes • n_i (≥ n_{i+1}) is the number of records belonging to class i • The lower bound on the subtree cost at N is thus the minimum of • n·E + 1 (the cost with zero splits) • 2s + 1 + s·log a + Σ_{i=s+2}^{k} n_i What's Clustering • Clustering is a kind of unsupervised learning. • Clustering is a method of grouping data that share similar trends and patterns. • Clustering of data is a method by which large sets of data are grouped into clusters of smaller sets of similar data. • Example: After clustering, we see that clustering means grouping of data, or dividing a large data set into smaller data sets of some similarity. Partitional Algorithms • Enumerate K partitions optimizing some criterion • Example: the squared-error criterion e² = Σ_{j=1}^{K} Σ_{i=1}^{n_j} ||x_i^{(j)} − c_j||², where x_i^{(j)} is the ith pattern belonging to the jth cluster and c_j is the centroid of the jth cluster. Squared Error Clustering Method 1. Select an initial partition of the patterns with a fixed number of clusters and cluster centers 2.
Assign each pattern to its closest cluster center and compute the new cluster centers as the centroids of the clusters. Repeat this step until convergence is achieved, i.e., until the cluster membership is stable. 3. Merge and split clusters based on some heuristic information, optionally repeating step 2. Agglomerative Clustering Algorithm 1. Place each pattern in its own cluster. Construct a list of interpattern distances for all distinct unordered pairs of patterns, and sort this list in ascending order. 2. Step through the sorted list of distances, forming for each distinct dissimilarity value dk a graph on the patterns where pairs of patterns closer than dk are connected by a graph edge. If all the patterns are members of a connected graph, stop. Otherwise, repeat this step. 3. The output of the algorithm is a nested hierarchy of graphs which can be cut at a desired dissimilarity level, forming a partition identified by the simply connected components in the corresponding graph. Agglomerative Hierarchical Clustering • The most widely used hierarchical clustering algorithm • Initially each point is a distinct cluster • Repeatedly merge the closest clusters until the number of clusters becomes K • "Closest" can be measured by d_mean(Ci, Cj) or d_min(Ci, Cj), and likewise by d_ave(Ci, Cj) or d_max(Ci, Cj) • Summary of Drawbacks of Traditional Methods • Partitional algorithms split large clusters • Centroid-based methods split large and non-hyperspherical clusters • Centers of subclusters can be far apart • The minimum-spanning-tree algorithm is sensitive to outliers and to slight changes in position • Exhibits a chaining effect on strings of outliers • Cannot scale up to large databases Model-based Clustering • Mixture of Gaussians • Component priors P(ω_i); each data point is drawn from N(μ_i, σ²I) • Consider data points x1, x2, …, xN and parameters P(ω_1), …, P(ω_k), σ • Likelihood function • Maximize the likelihood function by calculating its derivatives Overview of EM Clustering • Extensions and generalizations.
The EM (expectation maximization) algorithm extends the k-means clustering technique in two important ways: • Instead of assigning cases or observations to clusters to maximize the differences in means for continuous variables, the EM clustering algorithm computes probabilities of cluster memberships based on one or more probability distributions. The goal of the clustering algorithm then is to maximize the overall probability or likelihood of the data, given the (final) clusters. • Unlike the classic implementation of k-means clustering, the general EM algorithm can be applied to both continuous and categorical variables (note that the classic k-means algorithm can also be modified to accommodate categorical variables). EM Algorithm • The EM algorithm for clustering is described in detail in Witten and Frank (2001). • The basic approach and logic of this clustering method is as follows. • Suppose you measure a single continuous variable in a large sample of observations. • Further, suppose that the sample consists of two clusters of observations with different means (and perhaps different standard deviations); within each cluster, the distribution of values for the continuous variable follows the normal distribution. • The resulting distribution of values (in the population) is then a two-component mixture. EM vs. k-Means • Classification probabilities instead of classifications. The results of EM clustering are different from those computed by k-means clustering. The latter will assign observations to clusters to maximize the distances between clusters. The EM algorithm does not compute actual assignments of observations to clusters, but classification probabilities. In other words, each observation belongs to each cluster with a certain probability. Of course, as a final result you can usually review an actual assignment of observations to clusters, based on the (largest) classification probability. Finding k • V-fold cross-validation.
This type of cross-validation is useful when no test sample is available and the learning sample is too small to have the test sample taken from it. A specified V value for V-fold cross-validation determines the number of random subsamples, as equal in size as possible, that are formed from the learning sample. The classification tree of the specified size is computed V times, each time leaving out one of the subsamples from the computations and using that subsample as a test sample for cross-validation, so that each subsample is used V − 1 times in the learning sample and just once as the test sample. The CV costs computed for each of the V test samples are then averaged to give the V-fold estimate of the CV costs. Expectation Maximization • Example: four outcomes x1, x2, x3, x4 occur with probabilities P(x1) = 1/2, P(x2) = u, P(x3) = 2u, P(x4) = 1/2 − 3u • Likelihood for a students observing x1, b observing x2, c observing x3, d observing x4: L = (1/2)^a · u^b · (2u)^c · (1/2 − 3u)^d • To maximize L, calculate the log-likelihood log L and set its derivative to zero. Supposing a = 14, b = 6, c = 9, d = 10, then u = 1/10. If only the combined count of x1 and x2 is observed, say h students, the E-step splits it in proportion to the current probabilities: a = h/(2u + 1) and b = 2uh/(2u + 1). Gaussian (Normal) pdf • The Gaussian function with mean (μ) and standard deviation (σ). Properties of the function: • symmetric about the mean • attains its maximum value at the mean and tends to zero at plus and minus infinity • the distribution is often referred to as bell-shaped • at one standard deviation from the mean the function has dropped to about 0.61 (≈ e^(−1/2)) of its maximum value; at two standard deviations it has fallen to about 1/7 • the area under the curve within one standard deviation of the mean is about 0.6827; within two standard deviations it is 0.9545, and within three it is 0.9973; the total area under the curve is 1 (cf. the cumulative distribution F_{μ,σ²}(x)) Multi-variate Density Estimation Mixture of Gaussians • The parameter vector θ contains all the parameters of the mixture model.
The p_i are known as mixing proportions or prior probabilities. • A mixture-of-Gaussians model • Generic mixture: P(y) over components y1, y2, with component densities P(x|y1) and P(x|y2) Mixture Density • If we are given just x, we do not know which mixture component this example came from • We can evaluate the posterior probability that an observed x was generated from the first mixture component
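The E-step/M-step cycle described above can be sketched for a one-dimensional, two-component Gaussian mixture. The data, the initialization from the data extremes, and the small variance floor are illustrative assumptions.

```python
import math

def em_gmm_1d(xs, iters=50):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    mu = [min(xs), max(xs)]      # crude initialization from the data extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior probability that each point came from each component
        resp = []
        for x in xs:
            ps = [pi[k] / math.sqrt(2 * math.pi * var[k])
                  * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(ps)
            resp.append([p / s for p in ps])
        # M-step: re-estimate mixing proportions, means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk + 1e-6
    return pi, mu, var

xs = [0.0, 0.2, -0.1, 0.1, 5.0, 5.2, 4.9, 5.1]
pi, mu, var = em_gmm_1d(xs)
print([round(m, 2) for m in sorted(mu)])  # means near 0.05 and 5.05
```

Unlike k-means, the responsibilities in `resp` are soft: every point contributes to every component's mean and variance, weighted by its posterior probability.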
Model Theory for Algebra and Algebraic Geometry Spring 2010 Universite Paris-Sud Orsay Course Meeting: • Mon May 17, Wed May 19 • Wed May 26, Thu May 27 • Mon May 31, Wed June 2, Thu June 3 • Thu June 10 Monday and Wednesday lectures are at 10:00-12:00 in salle 113-115 Thursday lectures are 14:15-16:15 in salle 225-227. Instructor: David Marker e-mail: marker@math.uic.edu course webpage: http://www.math.uic.edu/~marker/orsay These lectures will be an introduction to some basic connections between model theory, algebra and algebraic geometry. The basic topics covered will include: • Logic, languages and structures • The Compactness Theorem and applications • Ultraproducts and a proof of compactness • Ax's Theorem that injective polynomial maps are surjective • Quantifier elimination tests • the model theory of algebraically closed fields and algebraic geometry • the model theory of real closed fields and semialgebraic geometry I also intend to discuss some advanced topics including: • o-minimality, subanalytic geometry and exponentiation • Asymptotic bounds on the number of rational points on o-minimal sets and Diophantine applications • D. Marker, Model Theory: An Introduction, Graduate Texts in Mathematics 217, Springer, New York, 2002. Lecture Notes I will try to provide lecture notes for some of my lectures. The notes will contain some material that will not be covered in the lectures.
Products > Olis Olis is a graph management application. Its job is to help administrators of Knowledge Graphs manage them as a series of sub-graphs. Opposite of Silo Olis ("Silo" reversed!) is the opposite of a Data Silo: there are as few barriers as possible to getting data in and out, and even to moving away from an Olis-managed dataset. Olis does things to ensure you are completely in control of your data and not locked in by it: • Contains schemas in the data □ Olis manages data that contains its schemas within it, not somewhere else that can only be accessed through special tools □ Administrative & configuration data used by Olis is just more data in the same dataset and is accessible in the same way • Strongly defined □ All the parts of data managed by Olis - objects, their relations, class definitions - are defined using open standards, so no data elements have meanings or roles that are implicit or hidden and that can be misunderstood • Uses standardised IO protocols □ All data within an Olis-managed dataset can be accessed via the SPARQL series of data standards, which includes not only a query language but protocols for lodging queries, receiving responses and dumping all data How it works Data in a Knowledge Graph can be segmented into sub-graphs in a manner similar to the way in which schemas in some relational database systems segment data. Multiple graphs can then be used together to form the total Knowledge Graph but managed separately, if required. Olis provides a model and an API for managing Knowledge Graph sub-graphs.
The Olis data model defines: • Real Graphs □ Knowledge Graph sub-graphs that contain data • Virtual Graphs □ Knowledge Graph sub-graphs that are aliases for other Real and Virtual Graphs and contain none of their own data Using the Olis API, you can make Virtual Graphs for complex datasets that consist of potentially very many Real Graphs and other Virtual Graphs that segment the dataset's data by time or some other dimension. For example: a data stream supplies new data to a Knowledge Graph every day. Olis can be used to define a Virtual Graph for that stream, and a Real Graph is created for incoming data each week. Data reprocessing or reasoning can then be performed on weekly Real Graph 'chunks', rather than on the whole dataset. This allows for a much more scalable and sustainable system.
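The Real/Virtual Graph model can be illustrated with a toy alias resolver. The function, the graph names, and the dictionary layout below are hypothetical sketches for illustration, not Olis's actual API (which is SPARQL-based).

```python
def resolve(name, virtual, seen=None):
    """Expand a graph name to the set of Real Graph names it denotes.
    `virtual` maps each Virtual Graph to the graphs it aliases; any name
    not in `virtual` is treated as a Real Graph. `seen` detects cycles."""
    seen = seen or set()
    if name in seen:
        raise ValueError(f"cyclic alias: {name}")
    if name not in virtual:
        return {name}          # a Real Graph resolves to itself
    members = set()
    for child in virtual[name]:
        members |= resolve(child, virtual, seen | {name})
    return members

# Hypothetical layout: a weekly stream rolled up into monthly Virtual Graphs.
virtual = {
    "stream": ["2024-01", "2024-02"],
    "2024-01": ["2024-w01", "2024-w02"],
    "2024-02": ["2024-w05"],
}
print(sorted(resolve("stream", virtual)))
# -> ['2024-w01', '2024-w02', '2024-w05']
```

A query against the "stream" Virtual Graph would then simply be evaluated over the union of the Real Graphs it resolves to.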
ULTIMATE SYMMETRY - I.2.2 The Speed of Light as the Fractal Dimension It is quite obvious according to the Re-Creation Principle that there is a terminal cosmological velocity, since everything is brought into existence by the only one Single Monad, which is the Source of all individual monads. The speed of the Single Monad is a property of the complex time-time field that it creates through its continuous alternation, or the refresh rate of re-creation. This is what gives rise to Special and General Relativity, while also allowing instantaneous physical change, and not only transfer of information, because there is no real continuous motion in the common sense that the same object gradually leaves its place to occupy new places; rather, it is re-created in these new places, which could be at the other end of the Universe right in the following instance, as usually happens with the two entangled EPR particles, or in quantum tunneling. The standard value of the speed of light in vacuum is now considered a universal physical constant, and its exact value is 299,792,458 meters per second. Since 1983, the length of the meter has been defined from this constant together with the international standard for time. However, this experimentally measured value corresponds to the speed of light in vacuum, which is in fact not exactly empty. The true speed that should be considered as the upper Cosmological Speed is the speed of light in absolute void rather than vacuum, which still has some energy that may interact with the photons; void, by contrast, is real nothingness. Of course, even vacuum is very hard to achieve in labs, so void is really impossible. The speed of light in the theoretical void is the absolute speed of the Single Monad that is necessarily in void, because it is the whole existence. This ultimate Cosmological Speed can be calculated on the basis of the metaphysical space-time structure, which is the complex-time geometry of the Duality of Time Theory.
In the three-dimensional space that is evolving in time, the Cosmological Speed should be an integer ratio that is exactly equal to 3, and it has no units, because both space and time originate from the same movements of the Single Monad, which performs six basic movements to create the three dimensions of space and then one movement to display it as a single frame. Because we naturally distinguish between space and time, this speed must be measured in terms of meters per second, and it should therefore be exactly equal to 300,000,000 (3 × 10^8) meters per second. The difference between this theoretical value and the standard measured value is what accounts for the quantum vacuum, in contrast to the absolute void that cannot be excited. Of course, all this depends also on the actual definitions of the meter and the second, which appear to be conventional but are in fact based on the same ancient Sumerian tradition, included in their sexagesimal system, which is fundamentally related to the structure of space-time, as we shall discuss further in section IV.4.4. It can be anticipated, based on the above conclusion, that the speed of light is directly related to the number of dimensions, so for other fundamental interactions, which are expected to be in lower dimensions as we discussed in Chapter II, the value of the speed of light should be considered according to these dimensions, and this might correct the values of many important constants in the Standard Model. Dimensionality is a relative and dynamic property, so for the Single Monad, since its existence is always Real, in the inner dimensions, its wave-length is zero and its frequency and energy are infinite. Therefore, because everything is absolutely transparent with regard to the Single Monad, it appears as absolutely three-dimensional, whereas heavy particles and objects, such as the Earth, Sun and Stars, propagate in Aether, which appears for them as only a two-dimensional superfluid.
The wave-lengths of particles are a reflection of how often they fluctuate between the inner and outer dimensions of time, relative also to the number of their individual monads or geometrical points and to the dimensions in which they extend, which ultimately governs how they interact with other particles according to their own relative dimensions. These interactions are exhibited in various properties such as reflection, refraction, diffraction or interference, and polarization, which are all descriptions of how energy, or frequency, propagates in the various dimensions of the complex-time geometry.
Where can I find experts to help me with my Operations Research homework problems?

Find information on services that work on the following: One of the most invaluable things you will ever get in your life is ever-present knowledge. When you’re working on a strategy puzzle and you think about how your strategy is performing, you don’t think about this concept quite so much! So be sure to use that helpful knowledge! Fibre Project The small project found with the Fibre Project is the game you have to make to do operations research on. Take the step of playing the Fibre Project and you will be astonished to see how it moves over time! You will notice it is very powerful and helps you make some progress! Javascript The user gets a bit nervous just because it’s being operated. There are many variables and you need to worry about handling them. How often is the number you add it? Do you add it to every entry on the table? The system that builds them can only grow a little bit, and as you do, those changes of the total time have to be as big as the ones coming into play. You’ll need to deal with that to make your game. To make things easy, once you’ve done lots of work trying to get to the point, there is even more that has to be done! Everything is on your table! It can be really hard to get at now, but this will help! To write it out quickly in advance, add the steps yourself; I think that you can do this by yourself. If you have something in store you can add it too. Step 3: Take a Survey. Step 5: In general, take the survey; you won’t ever want to give it away. Find out which experts are doing it, and find out what can be done to try to do the following steps in your problem: 1. Check out the data. Imagine that your goal is to have 10 things on the system and 10 lists of methods and things that can be solved in your development time. 2.
Research how things like this can be done; check where to look and how to solve them. 3. Study the new techniques. 4. Study the problems. A lot of people do this as part of lessons they saw, or of how they constructed the solutions. 5. Study the way to solve it. Learn an ancient way to solve the problems in your information collection. The answer to this is in your development time! Try to fill out a detailed problem file that will give a solution to your problem for later! The notes have the following: Note: Keep a long write-down in case you change your thinking and research around the idea or concept. If you’re a newbie then you might not want to have many ideas, for many reasons. Also, remember to always include this in your research so that the experts don’t give away your notes.

Where can I find experts to help me with my Operations Research homework problems? How do I find people for my homework problems? WEST MILLENNIEWIST APPROACH Every day at our school I find a book that I was meant to read even when I wasn’t given time to. It starts off with statistics based on the fact that it’s pretty out there, but it also has interesting information to tell you what the next step can be. It’s very hard to even compare this thing with some of its pretty uninteresting stuff (like numbers and logic!). There are a lot of people out there who like to learn math and don’t really have the time to do cross-referencing and understand the math. If you’re looking for someone who could totally do X, I’d recommend Kriek’s. It’s probably the one which is one of the best in my part of the world. I sort of recommend the book by the description you just read, at least. It seems like it best describes math and what was most useful for it a few decades ago. I do think that if people are involved in math classes I would be more interested in math than math is. A good enough teacher could even study mathematics.
This could be a powerful interest to some, but it doesn’t sound like I want people trying to study math just to know the basics. I know a couple of people who have been used in Math Tutoring books, but I can’t recall any instances where their student had to learn mathematics in math class when she wasn’t around trying to figure it out. Actually, after making my class schedule full, I needed a new teacher to assist me with the homework. My boss thought I should have a pencil-and-paper class. You can’t exactly tell about math in real life, so I decided to try this one and made my attempt. I was really lucky I didn’t have an elevator to buy new technology at the time, and all of my class had math classes, so I didn’t have to take the credit cards to grade math at the time. The teacher we interviewed was so nice, I’m just not very interested in math. Now, as I was finishing my Math Tutoring before everything was ready, I didn’t know if there was a school in my town filled with Math Tutors or whether it was the school itself. So it was up to me to prove that I was right, and then we would continue. Who knows what even made me decide to move forward. Learning was rewarding. It was exciting. But then when 2 days started I knew that the “no” wasn’t an answer for all of us. I got my lesson out of my lessons and started doing what I always wanted to do. I started the Math Tutoring class. I asked myself if that was obvious or doable. I found a new teacher and some excuse. The teacher was good at math. He was kind. He made me solve the problem. The teacher taught everyone.

Where can I find experts to help me with my Operations Research homework problems? I am from Brazil and I have been looking for experts to help me determine what topics are relevant for my Operations Research study. Wittgenstein and Schrödinger proved that the question “How to get a chair at one of these programs with the subject of Inference” has non-fiction value.
Who could help me with an Operations Research question number that covered these topics? What about the research topic? Would I need them? Will there be examples of solutions to come? One such solution is to ask Google to do a data analysis of the Google book, or to track how many people share the concept of a “How to” for an Inference question? It would obviously be silly to ask them to do this, but the research question itself should help solve some of the concerns I am having. The solutions given may solve a lot of the other issues I am having, or perhaps will solve a little confusion where the topic is not covered. Thanks in advance! What is a library for this question? A school course provided in the course would be particularly useful if there were a computer-based course as well. This could be a student-oriented course because it will require you to write a business logic or programming book, and be able to go through the paper-based papers on business logic. Be that as it may, you may be asking yourself whether to write this question, which will be a useful course for your school to participate in. 1- No, for this question you should already have experience with classes like these. 1 Answer Do you have experience with databases and SASS? You may, of course, need to be familiar with either in-house databases or SASS. The topic of ‘How to’ (which can also include statistical learning, and the other hand, of course) will help increase your education and understanding. For that, a better course plan needs to be developed. (3) How to add libraries More specifically, an out-of-the-box approach is required. We can only ask for the books that each program uses as they become available. Check it out. What is the most important point in library design? If a given library is not open, you may not be able to help guide what would happen to the existing libraries, including the source.
Also, if libraries are not checked out, you may not be able to help in searching for help on what the other lines in the library are. Do note that your situation may be different when there is no user-friendly library in the program. However, please check what they do! The other features of a library (and preferably of any new library projects, as well as of other types) are just fine. 2- Or how to go about making it scalable up to the target size? How to start with a scalable database or SASS
Important Formulas for H.C.F and L.C.M 1. Factors and Multiples: If a number a divides another number b exactly, we say that a is a factor of b. In this case, b is called a multiple of a. 2. Highest Common Factor (H.C.F.) or Greatest Common Measure (G.C.M.) or Greatest Common Divisor (G.C.D.): The H.C.F. of two or more than two numbers is the greatest number that divides each of them exactly. There are two methods of finding the H.C.F. of a given set of numbers: I. Factorization Method: Express each one of the given numbers as the product of prime factors. The product of the least powers of common prime factors gives the H.C.F. II. Division Method: Suppose we have to find the H.C.F. of two given numbers; divide the larger by the smaller one. Now, divide the divisor by the remainder. Repeat the process of dividing the preceding divisor by the remainder last obtained till zero is obtained as remainder. The last divisor is the required H.C.F. Finding the H.C.F. of more than two numbers: Suppose we have to find the H.C.F. of three numbers; then, the H.C.F. of [(H.C.F. of any two) and (the third number)] gives the H.C.F. of the three given numbers. Similarly, the H.C.F. of more than three numbers may be obtained. 3. Least Common Multiple (L.C.M.): The least number which is exactly divisible by each one of the given numbers is called their L.C.M. There are two methods of finding the L.C.M. of a given set of numbers: I. Factorization Method: Resolve each one of the given numbers into a product of prime factors. Then, the L.C.M. is the product of the highest powers of all the factors. II. Division Method (short-cut): Arrange the given numbers in a row in any order. Divide by a number which divides exactly at least two of the given numbers, and carry forward the numbers which are not divisible. Repeat the above process till no two of the numbers are divisible by the same number except 1. The product of the divisors and the undivided numbers is the required L.C.M. of the given numbers. 4.
Product of two numbers = Product of their H.C.F. and L.C.M. 5. Co-primes: Two numbers are said to be co-prime if their H.C.F. is 1. 6. H.C.F. and L.C.M. of Fractions: H.C.F. = (H.C.F. of numerators) / (L.C.M. of denominators); L.C.M. = (L.C.M. of numerators) / (H.C.F. of denominators). 7. H.C.F. and L.C.M. of Decimal Fractions: In the given numbers, make the same number of decimal places by annexing zeros in some numbers, if necessary. Considering these numbers without a decimal point, find the H.C.F. or L.C.M. as the case may be. Now, in the result, mark off as many decimal places as there are in each of the given numbers. 8. Comparison of Fractions: Find the L.C.M. of the denominators of the given fractions. Convert each of the fractions into an equivalent fraction with the L.C.M. as the denominator, by multiplying both the numerator and denominator by the same number. The resultant fraction with the greatest numerator is the greatest.
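The identity in rule 4 (product of two numbers = product of their H.C.F. and L.C.M.) and the fraction rule in 6 can be checked directly in Python; the sample numbers are illustrative.

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def lcm(a, b):
    # Rule 4: a * b = H.C.F.(a, b) * L.C.M.(a, b), so L.C.M. = a*b / H.C.F.
    return a * b // gcd(a, b)

def hcf_of_fractions(fracs):
    # Rule 6: H.C.F. of fractions = H.C.F. of numerators / L.C.M. of denominators
    return Fraction(reduce(gcd, (f.numerator for f in fracs)),
                    reduce(lcm, (f.denominator for f in fracs)))

print(gcd(36, 60), lcm(36, 60))                            # -> 12 180
print(hcf_of_fractions([Fraction(2, 3), Fraction(8, 9)]))  # -> 2/9
```

`math.gcd` implements the division (Euclidean) method described in rule 2.II; defining `lcm` via `gcd` is exactly the rule-4 identity.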
The Stacks project Lemma 51.4.7. Let $I \subset A$ be a finitely generated ideal of a ring $A$. If $M$ is a finite $A$-module, then $H^i_{V(I)}(M) = 0$ for $i > \dim(\text{Supp}(M))$. In particular, we have $\text{cd}(A, I) \leq \dim(A)$. Proof. We first prove the second statement. Recall that $\dim(A)$ denotes the Krull dimension. By Lemma 51.4.6 we may assume $A$ is local. If $V(I) = \emptyset$, then the result is true. If $V(I) \not= \emptyset$, then $\dim(\mathop{\mathrm{Spec}}(A) \setminus V(I)) < \dim(A)$ because the closed point is missing. Observe that $U = \mathop{\mathrm{Spec}}(A) \setminus V(I)$ is a quasi-compact open of the spectral space $\mathop{\mathrm{Spec}}(A)$, hence a spectral space itself. See Algebra, Lemma 10.26.2 and Topology, Lemma 5.23.5. Thus Cohomology, Proposition 20.22.4 implies $H^i(U, \mathcal{F}) = 0$ for $i \geq \dim(A)$, which implies what we want by Lemma 51.4.1. In the Noetherian case the reader may use Grothendieck's Cohomology, Proposition 20.20.7. We will deduce the first statement from the second. Let $\mathfrak a$ be the annihilator of the finite $A$-module $M$. Set $B = A/\mathfrak a$. Recall that $\mathop{\mathrm{Spec}}(B) = \text{Supp}(M)$, see Algebra, Lemma 10.40.5. Set $J = IB$. Then $M$ is a $B$-module and $H^i_{V(I)}(M) = H^i_{V(J)}(M)$, see Dualizing Complexes, Lemma 47.9.2. Since $\text{cd}(B, J) \leq \dim(B) = \dim(\text{Supp}(M))$ by the first part we conclude. $\square$
Exploring Geometry in Mathematics: A Comprehensive Guide Geometry is a branch of mathematics that deals with the properties and relations of points, lines, surfaces, and solids. It’s a subject that permeates every aspect of our daily lives, from the shapes of objects we interact with to the design of buildings and the technology we use. In this comprehensive guide, we will explore the fascinating world of geometry, its key concepts, theorems, and practical applications. Let’s embark on this mathematical journey together! 1. Understanding Geometry What is Geometry? Geometry is the study of shapes, sizes, and the properties of space. It involves understanding the relationships between different geometric figures and using these relationships to solve problems. Geometry has a long history, dating back to ancient civilizations where it was used for land surveying, architecture, and astronomy. Importance of Geometry in Mathematics Geometry is fundamental to many fields, including art, engineering, physics, and computer science. It helps us understand and describe the world around us in a precise and logical way. The principles of geometry are used in designing buildings, creating computer graphics, and even in navigating space. 2. Basic Geometric Concepts Points, Lines, and Planes The most basic concepts in geometry are points, lines, and planes. A point represents a location in space and has no size. A line is a straight one-dimensional figure that extends infinitely in both directions. A plane is a flat two-dimensional surface that extends infinitely in all directions. Angles and Their Types An angle is formed by two rays with a common endpoint, called the vertex. Angles are measured in degrees or radians. There are several types of angles: acute (less than 90 degrees), right (exactly 90 degrees), obtuse (between 90 and 180 degrees), and straight (exactly 180 degrees). 3. 
Polygons and Their Properties Types of Polygons Polygons are two-dimensional shapes with straight sides. They are classified based on the number of sides they have. Common polygons include triangles, quadrilaterals, pentagons, hexagons, and so on. Each type of polygon has its own set of properties and formulas for calculating area and perimeter. Regular vs. Irregular Polygons Regular polygons have all sides and angles equal, such as an equilateral triangle or a square. Irregular polygons have sides and angles that are not all the same. Understanding the properties of both regular and irregular polygons is essential for solving various geometric problems. 4. Triangles: The Building Blocks of Geometry Types of Triangles Triangles are the simplest polygons and have three sides and three angles. They are classified based on their sides and angles: equilateral (all sides and angles equal), isosceles (two sides and two angles equal), and scalene (all sides and angles different). Understanding these classifications is crucial for studying more complex geometric shapes. Triangle Theorems Several important theorems relate to triangles, including the Pythagorean theorem, which states that in a right triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides. Other key theorems include the Triangle Inequality Theorem and the properties of similar and congruent triangles. 5. Circles and Their Properties Understanding Circles A circle is a set of all points in a plane that are a fixed distance from a center point. This distance is called the radius. The diameter of a circle is twice the radius, and the circumference is the distance around the circle. Circles have unique properties and formulas for calculating their area and circumference. Arcs, Chords, and Sectors Arcs are portions of a circle’s circumference, chords are line segments with endpoints on the circle, and sectors are regions bounded by two radii and an arc. 
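The basic circle measurements just described (radius, diameter, circumference, area) can be checked numerically. A minimal Python sketch using only the standard library; the function name is ours, not from the text:

```python
import math

def circle_properties(radius):
    """Basic measurements of a circle with the given radius."""
    return {
        "diameter": 2 * radius,                 # twice the radius
        "circumference": 2 * math.pi * radius,  # distance around the circle
        "area": math.pi * radius ** 2,          # region enclosed by the circle
    }

props = circle_properties(3.0)
# The ratio circumference / diameter is pi for every circle.
ratio = props["circumference"] / props["diameter"]
```

Whatever radius you pass in, `ratio` comes out as π, which is exactly how π is defined.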
Understanding these components is essential for solving problems involving circles and their properties. 6. Quadrilaterals and Their Types Types of Quadrilaterals Quadrilaterals are four-sided polygons. Common types include squares, rectangles, parallelograms, trapezoids, and rhombuses. Each type has specific properties related to the lengths of sides, angles, and parallelism. These properties are useful for solving geometric problems involving quadrilaterals. Properties and Formulas Each type of quadrilateral has its own set of formulas for calculating area and perimeter. For example, the area of a rectangle is found by multiplying its length by its width, while the area of a trapezoid is found using the formula (base1 + base2) / 2 × height. Understanding these formulas is key to mastering geometry. 7. Transformations in Geometry Types of Transformations Geometric transformations include translations (sliding a shape), rotations (turning a shape), reflections (flipping a shape), and dilations (resizing a shape). These transformations can change the position, orientation, and size of shapes while preserving certain properties. Properties of Transformations Understanding the properties of transformations helps in solving problems related to congruence and similarity. For example, translations and rotations preserve the size and shape of figures, making them congruent to the original. Dilations change the size but not the shape, creating similar figures. 8. Coordinate Geometry The Cartesian Plane Coordinate geometry, also known as analytic geometry, involves using a coordinate plane to represent geometric figures. The Cartesian plane consists of an x-axis and y-axis, with points represented by ordered pairs (x, y). This system allows for precise calculation of distances, slopes, and midpoints.
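The distance, slope, and midpoint calculations just mentioned can be sketched directly in Python (helper names are ours; points are simple (x, y) tuples):

```python
import math

def distance(p, q):
    """Straight-line distance between points p = (x1, y1) and q = (x2, y2)."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def midpoint(p, q):
    """The point halfway between p and q."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def slope(p, q):
    """Rise over run; returns None for a vertical segment, where slope is undefined."""
    if q[0] == p[0]:
        return None
    return (q[1] - p[1]) / (q[0] - p[0])

# The 3-4-5 right triangle: the distance from (0, 0) to (3, 4) is 5.
d = distance((0, 0), (3, 4))
```

The `distance` function is just the Pythagorean theorem applied to the horizontal and vertical legs of the segment.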
Equations of Lines and Curves In coordinate geometry, lines can be represented by equations such as y = mx + b, where m is the slope and b is the y-intercept. Curves, like circles and parabolas, also have specific equations. Understanding these equations is crucial for solving geometric problems on the coordinate plane. 9. Solid Geometry Types of Solids Solid geometry deals with three-dimensional shapes, such as cubes, spheres, cylinders, and cones. Each type of solid has specific properties and formulas for calculating volume and surface area. For example, the volume of a sphere is (4/3)πr³, while the volume of a cylinder is πr²h. Surface Area and Volume Understanding how to calculate the surface area and volume of solids is important for solving real-world problems. These calculations are used in fields like architecture, engineering, and manufacturing to design and build structures and objects. 10. Geometric Constructions Basic Constructions Geometric constructions involve creating shapes, angles, and lines using only a compass and straightedge. Basic constructions include bisecting angles, drawing perpendicular lines, and constructing equilateral triangles. These techniques are foundational skills in geometry. Applications of Constructions Geometric constructions have practical applications in design, art, and engineering. For example, architects use geometric constructions to create accurate blueprints, and artists use them to achieve precise proportions in their work. 11. Theorems and Proofs Understanding Theorems Theorems are statements that have been proven to be true based on previously established statements and axioms. Examples include the Pythagorean Theorem, theorems about angles in a triangle, and the properties of parallel lines. Understanding these theorems is essential for solving geometric problems. Writing Proofs Proofs are logical arguments that demonstrate the truth of a theorem.
They involve a series of statements and reasons that follow from axioms, definitions, and previously proven theorems. Learning to write proofs is a critical skill in geometry and helps develop logical thinking. 12. Trigonometry in Geometry Basic Trigonometric Functions Trigonometry deals with the relationships between the angles and sides of triangles. The basic trigonometric functions are sine, cosine, and tangent, which relate the angles of a triangle to the ratios of its sides. These functions are essential for solving problems involving right triangles. Applications of Trigonometry Trigonometry has numerous applications in geometry, including solving problems involving triangles, calculating heights and distances, and modeling periodic phenomena. It is also used in fields like physics, engineering, and astronomy. 13. Non-Euclidean Geometry Types of Non-Euclidean Geometry Non-Euclidean geometry explores geometries that differ from the traditional Euclidean geometry. Examples include hyperbolic and elliptic geometry. These geometries have different properties and rules, providing alternative ways of understanding space. Applications of Non-Euclidean Geometry Non-Euclidean geometry has applications in physics, particularly in the theory of relativity. It also plays a role in computer graphics and other areas where different models of space are useful for solving complex problems. 14. Geometry in the Real World Architecture and Design Geometry is fundamental to architecture and design. Architects use geometric principles to create aesthetically pleasing and structurally sound buildings. Designers use geometry to create patterns, layouts, and proportions in their work. Technology and Engineering In technology and engineering, geometry is used to design and manufacture everything from computer chips to bridges. Understanding geometric principles is essential for creating efficient, functional, and innovative solutions in these fields. 15. 
Teaching and Learning Geometry Effective Teaching Strategies Teaching geometry effectively involves using visual aids, hands-on activities, and real-world examples. Encouraging students to explore and discover geometric principles helps them develop a deeper understanding and appreciation for the subject. Resources for Learning Geometry There are many resources available for learning geometry, including textbooks, online courses, and interactive software. Using a variety of resources can help reinforce concepts and provide different perspectives on the subject. Geometry is a rich and fascinating field of mathematics that has practical applications in numerous areas of life. From understanding the basic properties of shapes to exploring advanced concepts like non-Euclidean geometry, this subject offers endless opportunities for discovery and learning. By mastering the principles of geometry, you can enhance your problem-solving skills, appreciate the beauty of the world around you, and apply mathematical concepts to real-world situations. Start your journey into the world of geometry today and unlock the endless possibilities it holds. 1. What is geometry? Geometry is a branch of mathematics that deals with the properties and relations of points, lines, surfaces, and solids. It involves understanding shapes, sizes, and the properties of space. 2. Why is geometry important? Geometry is important because it helps us understand and describe the world around us in a precise and logical way. It is fundamental to fields such as art, engineering, physics, and computer science. 3. What are the basic concepts of geometry? The basic concepts of geometry include points, lines, and planes. A point represents a location, a line is a one-dimensional figure, and a plane is a two-dimensional surface. 4. What are the types of angles in geometry?
The types of angles in geometry include acute angles (less than 90 degrees), right angles (exactly 90 degrees), obtuse angles (between 90 and 180 degrees), and straight angles (exactly 180 degrees). 5. What are polygons? Polygons are two-dimensional shapes with straight sides. They are classified based on the number of sides they have, such as triangles, quadrilaterals, pentagons, and hexagons. 6. How are triangles classified? Triangles are classified based on their sides and angles. Equilateral triangles have all sides and angles equal, isosceles triangles have two sides and two angles equal, and scalene triangles have all sides and angles different. 7. What is the Pythagorean Theorem? The Pythagorean Theorem states that in a right triangle, the square of the hypotenuse (the side opposite the right angle) is equal to the sum of the squares of the other two sides. 8. What are the key properties of circles? Key properties of circles include the radius (distance from the center to any point on the circle), diameter (twice the radius), and circumference (distance around the circle). 9. What is coordinate geometry? Coordinate geometry, also known as analytic geometry, uses a coordinate plane to represent geometric figures. It involves equations and calculations to describe points, lines, and curves. 10. How is geometry used in the real world? Geometry is used in various real-world applications, including architecture, design, technology, and engineering. It helps create structures, design objects, and solve practical problems.
{"url":"https://mathematicalexplorations.co.in/exploring-geometry-in-mathematics/","timestamp":"2024-11-08T13:58:04Z","content_type":"text/html","content_length":"251422","record_id":"<urn:uuid:3d8571ab-0124-4202-8592-b8385445ecc7>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00437.warc.gz"}
Quantifying Sets: Cardinality, Countability, and Distribution for "How Many Are" Questions “How many are” is a question answered by mathematical concepts like cardinality, counting, and distribution. Cardinality measures numerical size, while countability determines if a set can be matched with natural numbers. Multiplicity examines repeated occurrences, while distribution-focused concepts like prevalence, rarity, frequency, and magnitude capture the spread and occurrence of elements within a set. Understanding these concepts empowers us to quantify and compare the size and distribution of sets, providing a solid framework for answering “how many are” questions accurately. Unlocking the Mystery of “How Many Are”: Exploring Cardinality and Countability In the tapestry of human language, the question “how many are” weaves through our daily conversations. It’s a fundamental inquiry that helps us navigate the world around us, from counting the apples in a basket to understanding the vastness of the cosmos. But behind the simplicity of the question lies a rich mathematical framework that unveils the complexities of cardinality and countability. Cardinality: A Measure of Numerical Size At the heart of answering “how many are” lies the concept of cardinality. It’s the measure of the number of elements in a set. Imagine a set of seashells, each one a unique treasure. Cardinality tells us how many of these shimmering wonders fill the collection. Countability, a crucial companion to cardinality, comes into play when we want to determine whether a set can be put in a one-to-one correspondence with the natural numbers (1, 2, 3,…). Countable sets behave like a well-organized queue, where each element can be assigned a distinct number.
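The queue analogy can be made concrete in code. The integers stretch to infinity in both directions, yet they can still be listed one at a time, giving exactly the one-to-one pairing with 1, 2, 3, … that makes them countable. A small Python sketch:

```python
from itertools import islice

def integers():
    """Enumerate all integers in the order 0, 1, -1, 2, -2, ...

    Every integer appears at exactly one position in this list, which
    is the one-to-one correspondence with the natural numbers that
    makes the set of integers countable.
    """
    yield 0
    n = 1
    while True:
        yield n
        yield -n
        n += 1

first_seven = list(islice(integers(), 7))  # [0, 1, -1, 2, -2, 3, -3]
```

Any particular integer, however large or negative, turns up after finitely many steps; no element is left waiting forever.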
Non-countable sets, on the other hand, defy such alignment, like an unruly crowd where some remain uncounted. Beyond Cardinality: Exploring Multiplicity and Distribution Our exploration continues with multiplicity, the number of times an element appears in a set. This concept is particularly useful when we want to identify the frequency of specific elements, distinguishing the common from the rare. But cardinality and multiplicity are just pieces of the puzzle. To fully understand the distribution of elements within a set, we need to delve into prevalence, rarity, and frequency. Prevalence measures how widespread an element is, while rarity gauges its infrequency. Frequency, on the other hand, tracks the occurrence of an element over time or intervals. The dance between these concepts unfolds in countless real-world applications. In ecology, for instance, abundance and scarcity shape the delicate balance of biodiversity, determining the coexistence and survival of species. In finance, the magnitude of investments and their frequency influence market dynamics, steering the flow of wealth and shaping economies. Cardinality: Unveiling the Numerical Strength of Sets In our everyday interactions, we often inquire about the size or quantity of objects or events. Whether it’s counting the number of people in a room or measuring the length of a road, we seek to quantify the extent of what we perceive. Mathematics provides us with a powerful framework to address these questions through the concept of cardinality. Cardinality, in its essence, measures the numerical size of a set. A set, in mathematical terms, is a well-defined collection of distinct objects. The cardinality of a set represents the precise number of elements that constitute it. This concept forms the cornerstone of our ability to compare the sizes of different sets and to determine whether they have an equal number of elements. In order to determine the cardinality of a set, we employ the idea of countability.
A set is considered countable if its elements can be put into a one-to-one correspondence with the natural numbers (1, 2, 3, …). This means that each element in the set can be uniquely paired with a natural number, allowing us to establish a clear order and determine its numerical size. Countable sets are often referred to as finite or infinite, depending on whether they have a finite or an infinite number of elements, respectively. Finiteness is a crucial aspect of cardinality, as it establishes that a set has a specific, well-defined number of elements. For instance, a set containing the numbers {1, 3, 5} has a cardinality of 3, indicating that it comprises exactly 3 distinct elements. Conversely, an infinite set, such as the set of natural numbers, has an infinite cardinality, as its elements cannot be exhaustively listed and counted. Understanding cardinality is essential for comparing the sizes of sets. By determining the cardinality of each set, we can establish whether they have the same number of elements, making them equal in size. Cardinality also enables us to perform mathematical operations involving sets, such as set union and intersection, which rely on the number of elements in each set. In summary, cardinality provides a precise measure of the numerical size of sets. Through its connection to countability, finiteness, and the comparison of sets, cardinality empowers us to quantify and compare the extent of mathematical entities, unveiling the numerical strength and characteristics of the sets we encounter in the world around us. Multiplicity: Counting the Repeated Encounters In our everyday conversations, we often ask questions like: “How many cats are there in this room?” or “How many times have I watched my favorite movie?” Answering these questions requires an understanding of a mathematical concept called multiplicity, which represents the number of times an element appears in a set. 
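In Python, `collections.Counter` computes exactly these quantities. A sketch using the multiset {1, 2, 2, 3, 4} that this article itself uses as an example:

```python
from collections import Counter

data = [1, 2, 2, 3, 4]        # a multiset: the element 2 occurs twice
counts = Counter(data)        # maps each element to its multiplicity

distinct = len(counts)        # cardinality of the set of distinct elements
multiplicity_of_2 = counts[2]
top = counts.most_common(1)   # the most frequent element with its count
```

Here `distinct` is 4 (the four unique numbers) while `multiplicity_of_2` is 2, illustrating how multiplicity refines plain cardinality.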
Multiplicity is closely related to two other important concepts: cardinality and abundance. Cardinality refers to the total number of elements in a set, while abundance measures how widespread an element is within that set. For example, if we have a room with 5 cats, the cardinality of the set of cats is 5, and the abundance of cats in the room is also 5 (since all 5 cats are present). However, multiplicity goes beyond cardinality by considering repeat occurrences. Suppose we have a set of numbers: {1, 2, 2, 3, 4}. The cardinality of this set is 4, indicating that there are 4 unique numbers. However, the multiplicity of the number 2 is 2, meaning it appears twice in the set. Understanding multiplicity is essential in various fields. In ecology, abundance and scarcity of species impact biodiversity and ecosystem stability. In finance, the magnitude of investments and their frequency influence market dynamics. By exploring multiplicity and its connections to cardinality, abundance, and other related concepts, we gain a deeper understanding of the quantitative nature of the world around us. This understanding allows us to ask more informed questions and make more accurate statements about the size and distribution of sets, making it a fundamental building block in many disciplines. Unveiling the Rich Tapestry of Distribution-Focused Concepts Beyond the fundamental concept of cardinality, which measures the sheer number of elements in a set, mathematicians have developed an array of sophisticated concepts that delve into the intricate distribution of these elements. Prevalence: Unveiling the Omnipresence Prevalence captures the extent to which an element graces the set. It measures the widespread presence of an element, revealing its ubiquity or scarcity. For instance, if a particular species is found in numerous habitats across a vast geographical area, it exhibits high prevalence. Rarity: Spotlighting the Elusive Rarity stands in stark contrast to prevalence. 
It quantifies the uncommonness or infrequency of an element’s occurrence. Think of a rare species that inhabits a small, secluded ecosystem. Its rarity underscores its uniqueness and the challenges of encountering it. Frequency: Tracking Rhythmic Appearances Frequency chronicles the repetition of an event or element over time or within specific intervals. It unravels the temporal distribution, revealing patterns and regularities. For example, the frequency of rainfall in a region provides insights into its climate and seasonal variations. Magnitude: Exploring Comparative Significance Magnitude delves into the relative size or extent of an element in relation to others within the set. It allows us to compare and contrast elements, revealing their relative importance or influence. Imagine a set of investments with varying returns; their magnitudes provide valuable information for decision-making. Intertwined Concepts: A Symphony of Insights These distribution-focused concepts are not isolated entities but rather form an interwoven tapestry. Each concept contributes a unique perspective, shedding light on different aspects of the set’s structure and composition. Prevalence and rarity complement each other, highlighting the extremes of distribution. Frequency tracks temporal patterns, while magnitude uncovers comparative relationships. Together, they provide a comprehensive understanding of how elements are distributed within a set. By delving into the nuances of distribution-focused concepts, we unlock a deeper understanding of the world around us. From ecological diversity to financial markets, these concepts empower us to quantify, compare, and interpret the distribution of elements, providing a powerful lens for unraveling the complexities of our universe. Interconnections and Applications: The Power of Quantification in the Real World The mathematical concepts we’ve explored—cardinality, countability, and others—are not merely abstract ideas. 
They have profound implications and applications in various fields, shedding light on the “how many are” questions that permeate our world. In ecology, for instance, understanding abundance and scarcity is crucial for studying biodiversity. Abundant species are widespread and numerous, while scarce species are rare and localized. Recognizing these patterns allows ecologists to assess ecosystem health, identify endangered species, and implement conservation strategies. In the realm of finance, the concepts of magnitude and frequency of investments play a pivotal role in market dynamics. Magnitude refers to the size of an investment, while frequency indicates how often investments are made. These factors influence the risk and return profiles of financial portfolios, enabling investors to make informed decisions based on their risk tolerance and investment goals. Beyond these specific examples, the interconnected nature of these concepts extends to countless other fields, including:
• Sociology: Analyzing population growth and distribution
• Data Science: Quantifying data size and occurrence
• Healthcare: Measuring disease prevalence and treatment efficacy
• Engineering: Assessing component quantities and failure rates
By understanding these concepts and their interconnections, we gain a deeper appreciation for the power of quantification. It enables us to measure, compare, and analyze the size and distribution of sets, providing valuable insights into diverse phenomena and empowering us to make informed decisions in various domains.
{"url":"https://sciencemind.blog/quantifying-sets-cardinality-countability-distribution-how-many-are-questions/","timestamp":"2024-11-12T23:18:13Z","content_type":"text/html","content_length":"140099","record_id":"<urn:uuid:820d26b7-6006-4dac-ab90-c6865cf27a05>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00731.warc.gz"}
Mathematics of the Jewish Calendar/The Atbash - Wikibooks, open books for an open world
The Atbash
Once the day of the week of Rosh Hashana is known, so is the day of the week of every date from the previous 1st Adar (Adar Sheni in a leap year) until the next 29th Cheshvan. For dates before or after that, variations are possible depending on whether the previous year was a leap year and whether the current year is defective, regular or abundant (affecting the number of days in Cheshvan and Kislev). A well-known mnemonic for calculating days of the week is the Calendar Atbash. An Atbash is a simple cypher where the first letter of the alphabet is replaced by the last, the second by the next to last, and so on. Thus Aleph is replaced by Tav, Beth by Shin and so on; this gives the acronym Atbash. Applying the Atbash to the first seven days of Pesach, we get:
1. Aleph - Tav - Tisha B'Av
2. Beth - Shin - Shavuot
3. Gimel - Resh - Rosh Hashana
4. Daled - Kuf - Keriat Hatorah, i.e. Simchat Torah, a day devoted to Keriat ("reading of") the Torah
5. He - Tzadi - Yom Tzom Kippur, the Day of the Fast of Atonement
6. Vav - Pe - Purim
7. Zayin - Ayin - Yom ha-Atzmaut, Israel Independence Day
This is to be read "The first day of Pesach is on the same day of the week as the date beginning Tav, i.e. Tisha b'Av", etc. (The first line is spoilt if that day is Shabbat so that the fast has to be postponed to Sunday.) Israel Independence Day may also be moved. Note that the Atbash remained incomplete until the creation of the State of Israel meant that this new festival was created. Since Rosh Hashana cannot fall on any of the three days Sunday, Wednesday or Friday, there are likewise three days of the week on which any given date from 1st Adar (Adar Sheni in a leap year) until 29th Cheshvan cannot fall. For dates before or after that, the situation is more complex; it is necessary to check the details of all fourteen possible types of year.
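Because every month from Adar through Elul has a fixed length (only Cheshvan and Kislev vary, and both lie outside this stretch), the Atbash pairings can be verified by pure day-counting: each pair of dates must be separated by a whole number of weeks. A Python sketch, using the standard fixed month lengths and taking Simchat Torah as 23 Tishrei, its Diaspora date:

```python
# Fixed month lengths from Adar (Adar Sheni in a leap year) onwards;
# Cheshvan and Kislev, the only variable months, lie outside this range.
MONTHS = {"Adar": 29, "Nisan": 30, "Iyar": 29, "Sivan": 30,
          "Tammuz": 29, "Av": 30, "Elul": 29, "Tishrei": 30}
ORDER = ["Adar", "Nisan", "Iyar", "Sivan", "Tammuz", "Av", "Elul", "Tishrei"]

def day_number(day, month):
    """Days elapsed from 1 Adar to the given date (1 Adar -> 0)."""
    idx = ORDER.index(month)
    return sum(MONTHS[m] for m in ORDER[:idx]) + day - 1

ATBASH_PAIRS = [
    ((15, "Nisan"), (9, "Av")),        # 1st day Pesach <-> Tisha B'Av
    ((16, "Nisan"), (6, "Sivan")),     # 2nd day Pesach <-> Shavuot
    ((17, "Nisan"), (1, "Tishrei")),   # 3rd day Pesach <-> Rosh Hashana
    ((18, "Nisan"), (23, "Tishrei")),  # 4th day Pesach <-> Simchat Torah
    ((19, "Nisan"), (10, "Tishrei")),  # 5th day Pesach <-> Yom Kippur
    ((20, "Nisan"), (14, "Adar")),     # 6th day Pesach <-> Purim
    ((21, "Nisan"), (5, "Iyar")),      # 7th day Pesach <-> Yom ha-Atzmaut
]

# A zero offset mod 7 means the two dates share a weekday.
offsets = [(day_number(*b) - day_number(*a)) % 7 for a, b in ATBASH_PAIRS]
```

All seven offsets come out zero: for example, Tisha B'Av falls exactly 112 days (16 weeks) after the first day of Pesach.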
Forbidden weekdays for some important dates are:
• Fast of Esther: Sunday, Tuesday, Friday; if it falls on Saturday (Shabbat), it is observed on Thursday instead. (It cannot be postponed until Sunday, as that day is Purim.)
• Purim (14 Adar): Saturday, Monday, Wednesday (so Purim cannot fall on Shabbat except in places such as Jerusalem where it is observed a day late). Lag b'Omer is on the same day of the week as Purim.
• Pesach (1st day): Monday, Wednesday, Friday.
• Israel Independence Day (normally 5th Iyar, but movable): subject to special rules, which have changed over the years; it can now only fall on Tuesday, Wednesday or Thursday.
• Shavuot (1st day): Tuesday, Thursday, Saturday. The next Hoshana Rabba is on the same day of the week as Shavuot.
• Fasts of Tammuz and Av: Monday, Wednesday, Friday; if they fall on Saturday (Shabbat) they are postponed to Sunday.
• Rosh Hashana (1st day), 1st day Succot, Shemini Atzeret: Sunday, Wednesday, Friday.
• Fast of Gedaliah: Sunday, Tuesday, Friday, but if it falls on Saturday (Shabbat) it is postponed to Sunday.
• Yom Kippur: Sunday, Tuesday, Friday.
• 1st day Chanukah: Tuesday.
• Fast of Tevet: Monday or Saturday; it can never be Wednesday in an ordinary year, or Thursday in a leap year. It is the only public fast that can fall on Friday.
• New Year for Trees: Sunday or Friday; it can never be Tuesday in an ordinary year, or Wednesday in a leap year.
• Purim Katon: Monday, Thursday, Saturday.
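Within the fixed stretch of the year, every entry in this list follows from one fact: Rosh Hashana can only fall on Monday, Tuesday, Thursday or Saturday, and each date sits a fixed number of days (mod 7) from it. A Python sketch for a few of the dates above (offsets computed from the fixed month lengths; the weekday indexing is ours):

```python
WEEKDAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
            "Thursday", "Friday", "Saturday"]
# Rosh Hashana never falls on Sunday, Wednesday or Friday, so its
# allowed weekdays are Monday, Tuesday, Thursday and Saturday.
RH_ALLOWED = {1, 2, 4, 6}

# Signed day offsets from Rosh Hashana; negative values are dates
# earlier in the fixed stretch (1 Adar to 29 Cheshvan).  For example,
# 1st day Pesach is 163 days before the following Rosh Hashana.
OFFSETS = {
    "Pesach (1st day)": -163,
    "Shavuot (1st day)": -113,
    "Yom Kippur": 9,
    "Succot (1st day)": 14,
}

def forbidden_weekdays(offset):
    """Weekdays on which a date at this fixed offset from Rosh Hashana
    can never fall."""
    allowed = {(rh + offset) % 7 for rh in RH_ALLOWED}
    return sorted(WEEKDAYS[i] for i in set(range(7)) - allowed)

pesach = forbidden_weekdays(OFFSETS["Pesach (1st day)"])
# pesach == ['Friday', 'Monday', 'Wednesday'], matching the list above
```

Running the same function on the other offsets reproduces the forbidden days quoted for Shavuot, Yom Kippur and Succot as well.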
{"url":"https://en.m.wikibooks.org/wiki/Mathematics_of_the_Jewish_Calendar/The_Atbash","timestamp":"2024-11-08T04:20:55Z","content_type":"text/html","content_length":"24472","record_id":"<urn:uuid:2e215c93-aeb6-416c-80fa-e42979627d1c>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00520.warc.gz"}
How big of a chicken coop do you need for 25 chickens? 50-100 square feet As we mention in our Chicken Coop Buyer’s Guide, you need somewhere between 2 and 4 square feet per standard size chicken in order for them to live comfortable, healthy and happy lives. So, your coop needs the following amount of square feet: 20 Chickens: 40-80 square feet. 25 Chickens: 50-100 square feet. How big of a chicken coop do you need for 24 chickens? Try to plan for at least 10 square feet of outdoor space per chicken. But really, the more space you can provide, the happier your chickens will be. In addition to outdoor space, your coop should have roosting bars—preferably at least eight to 12 inches per bird—so they can sleep comfortably at night. How much coop space does a bantam chicken need? 2 square feet Bantams, being smaller, don’t need as much space per bird. This is one reason they are popular in backyard flocks. 2 square feet per bird is adequate if they are allowed daytime forage, so a 4′ by 8′ coop could house 16 bantams. How many chickens can you put in a 4×5 coop? Our 4′ x 5′ Lean To Coop Specs at a Glance: Estimated space for 8 to 10 chickens. How many nesting boxes do I need for 25 chickens? In fact, one six-hole nest box would probably be sufficient for 25 laying hens, or 6 extremely pampered laying hens. How many bantams should I get? Chickens are social birds and they do not fare well on their own, so you should have a minimum of three. Anything less than 3 can cause stress in chickens. How many chickens can you put in a 4×6 coop? 15 chickens Cottage Style 4×6 Chicken Coop (up to 15 chickens) How many nesting boxes do I need for 24 chickens? How many: You do not need a nest box for every hen, but you also don’t want to provide too few boxes, which can increase the likelihood of drama in your flock and could lead to broken eggs or “yard eggs” being laid outside the nesting boxes.
Usually, one nest box for every 4-5 hens is enough. How many laying boxes do I need for 20 chickens? A good rule of thumb is a ratio of one nesting box for every four chickens. How big of a shed do I need for 20 chickens? As far as floor space in your coop goes, you’ll want to allow for 3-4 square feet per chicken. In addition to the regular “human-sized” door in your coop, you’ll likely also want a smaller “chicken-sized” doorway for your flock to use to access their pen. How many chickens can a 4×4 Coop hold? 4×4 Chicken Coop Considerations It is important to note that a 4×4 coop is not for everyone. It could only house around four to six chickens. Don’t overcrowd your coop. Chickens need their space, and they could get stressed if they don’t get enough room. How many chickens will an 8×8 Coop hold? Houses up to 32 chickens The extra space is immediately apparent once you step inside, which means you’ll have plenty of room to care for your flock. With nesting box access from outside the coop, gathering eggs has never been easier or more fun. Do bantam chickens lay eggs every day? Once a bantam chicken starts laying eggs, they will lay every other day for about four (4) to six (6) months, then they will stop producing while they shed their feathers (called molting).
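The rules of thumb scattered through these answers (2-4 sq ft of coop floor per standard bird, about 10 sq ft of outdoor run per bird, one nesting box per 4-5 hens) combine into a quick sizing calculator. A Python sketch with our own function name and defaults (the generous end of each range is used):

```python
def coop_requirements(n_chickens, sq_ft_per_bird=4, hens_per_nest_box=4):
    """Rough space plan: 2-4 sq ft of coop floor per standard-size bird
    (4 used as the generous default), about 10 sq ft of outdoor run per
    bird, and one nesting box for every 4-5 hens."""
    return {
        "coop_sq_ft": n_chickens * sq_ft_per_bird,
        "run_sq_ft": n_chickens * 10,
        "nest_boxes": -(-n_chickens // hens_per_nest_box),  # ceiling division
    }

plan = coop_requirements(25)
# 25 birds -> 100 sq ft of coop, 250 sq ft of run, 7 nest boxes
```

For 25 chickens this reproduces the 100 sq ft upper figure quoted above; pass `sq_ft_per_bird=2` to get the 50 sq ft lower bound instead.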
{"url":"https://www.david-cook.org/how-big-of-a-chicken-coop-do-you-need-for-25-chickens/","timestamp":"2024-11-12T15:23:05Z","content_type":"text/html","content_length":"39956","record_id":"<urn:uuid:e781d27e-c91b-46d6-ba34-3e9a7b158b3e>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00231.warc.gz"}
Solutions for Weightless Mohr-Coulomb Materials

When there is a frictional component of shear strength (i.e., ϕ > 0) but no body forces (γ = 0), we have shown in Section 12.6 that for uniform surface loadings the slip lines or “characteristics” must be straight lines or log spirals. For wedges, then, the combined failure mechanism will be similar to that for EPS material, but with the circular portion replaced by a log-spiral fan as in Figure 12.15. We could now work our way along the slip surface from the known boundary point at C on BC to the unknown boundary point A on BA using the solution [Equation (12.20)] to Kötter’s equation for γ = 0. Just as for the previous case of a heavy EPS with ϕ = 0, this shooting technique would give a solution dependent on the extent of zone II (the angle ψ), which, in turn, is related to the angle of wall friction. Unfortunately this general development and the resulting expressions are complicated and cumbersome. For example, even if we assume a

Let us instead work out the specific case of the classic punch or strip footing problem shown in Figure 12.16. If p_L is the major principal stress, then the slip surface for the active wedge (zone I) must be at

But since D is also on the curved log spiral, the solution [Equation (12.20a)] to Kötter’s equation applies and

The same procedure can now be used at point E, which is on the log-spiral fan and wedge III. In this case

which is the classic result derived by Prandtl* in 1920 for the punch problem for a weightless material. For foundations, q is due to the depth of burial, d, so q = γd and the formula is written

As a final example, consider the case of a vertical retaining wall. If the wall is smooth, the normal stress on it is principal and the Coulomb solution, Equation (12.3) or (12.5), must be correct, since zone II disappears (Figure 12.17) and the slip surface is straight.
It is important to note that when the slip lines are straight it is not necessary to assume the material weightless, since α is a constant and Kötter’s equation can be integrated directly, giving the Coulomb solution. However, if the wall is rough (δ > 0) we must assume γ = 0 to integrate along the log-spiral portion of the slip surfaces for either the active or passive cases, as shown in Figure 12.18. It will turn out that the effect of roughness is not very significant for the active case, while it is very important for passive failure. Therefore, let us solve the passive case and leave the active case for a chapter problem. To simplify the situation further, let us assume that c = 0.

The stresses on the wall are constant, so that at any depth the Mohr’s circle is as shown in Figure 12.18. Therefore

where µ is the angle from the x axis to the major principal stress. But it can be shown from Mohr’s circle (Figure 12.18) that

so that µ, which is the change in angle from D to E as we move along the log-spiral fan, can be found in terms of any angle of wall friction. Since we now know the critical shear stress on the slip surface at D and E, we can proceed as before to apply Kötter’s equation to determine that

Total forces on the wall, N_p and T_p, necessary to reach the passive limit state are then these stresses times the height of the wall.
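The passage above elides the Prandtl formula itself, so as a hedged illustration: the standard textbook form of Prandtl’s 1920 result for a weightless, cohesionless Mohr-Coulomb material is the bearing-capacity factor N_q = e^{π tan ϕ} tan²(45° + ϕ/2), with limit pressure p = q·N_q. The sketch below assumes that standard form; it is not taken verbatim from this text.

```python
import math

def prandtl_nq(phi_deg):
    """Bearing-capacity factor N_q for a weightless Mohr-Coulomb material,
    in the standard textbook form attributed to Prandtl (1920):
        N_q = exp(pi * tan(phi)) * tan^2(45 deg + phi / 2)."""
    phi = math.radians(phi_deg)
    return math.exp(math.pi * math.tan(phi)) * math.tan(math.pi / 4 + phi / 2) ** 2

# For phi = 0 the log-spiral fan contributes nothing and N_q = 1;
# for phi = 30 degrees the factor is roughly 18.4.
print(prandtl_nq(0.0), prandtl_nq(30.0))
```

Note how quickly the factor grows with friction angle: the exponential term comes from integrating Kötter’s equation along the log-spiral fan.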
{"url":"https://www.brainkart.com/article/Solutions-for-Weightless-MohrCoulomb-Materials_4861/","timestamp":"2024-11-14T10:54:27Z","content_type":"text/html","content_length":"40233","record_id":"<urn:uuid:00a08c00-c558-464f-b9d5-61576b09ceef>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00790.warc.gz"}
[Solved] If a+b+c=0 and |a|=3, |b|=5, |c|=7, then the angle between a and b is... | Filo

If a + b + c = 0 and |a| = 3, |b| = 5, |c| = 7, then what is the angle between a and b?

Solution: Let θ be the angle between a and b. Since c = -(a + b),

|c|² = |a + b|² = |a|² + |b|² + 2|a||b| cos θ
49 = 9 + 25 + 2(3)(5) cos θ
cos θ = 15/30 = 1/2,

so θ = π/3.

Updated On: Aug 3, 2023. Topic: Vector Algebra. Subject: Mathematics. Class: Class 12.
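The cosine-rule step above is easy to sanity-check numerically: construct concrete vectors with |a| = 3 and |b| = 5 separated by the computed angle, and verify that |a + b| = 7. A minimal stdlib sketch:

```python
import math

# Angle implied by |c|^2 = |a|^2 + |b|^2 + 2|a||b|cos(theta) with |c| = 7
theta = math.acos((7**2 - 3**2 - 5**2) / (2 * 3 * 5))

# Concrete 2-D vectors realizing |a| = 3 and |b| = 5 at that angle
a = (3.0, 0.0)
b = (5 * math.cos(theta), 5 * math.sin(theta))
c = (-(a[0] + b[0]), -(a[1] + b[1]))   # enforces a + b + c = 0

print(math.degrees(theta))   # 60 degrees, i.e. pi/3
print(math.hypot(*c))        # magnitude of c comes out as 7
```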
{"url":"https://askfilo.com/math-question-answers/if-mathbfamathbfbmathbfc0-and-mathbfa3mathbfb5mathbfc7-then-the-angle-between","timestamp":"2024-11-12T13:38:39Z","content_type":"text/html","content_length":"483598","record_id":"<urn:uuid:6e95e67d-ef25-44b5-b91b-b7bdf06fabee>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00164.warc.gz"}
Solving an LP Problem with Data in MPS Format

Example 4.6: Solving an LP Problem with Data in MPS Format

In this example, PROC INTPOINT is ultimately used to solve an LP. But prior to that, there is SAS code that is used to read an MPS format file and initialize an input SAS data set. MPS was an optimization package developed for IBM computers many years ago, and the format by which data had to be supplied to that system became the industry standard for other optimization software packages, including those developed recently. The MPS format is described in Murtagh (1981).

If you have an LP whose data is in MPS format in a file /your-directory/your-filename.dat, then the following SAS code should be run:

   filename w '/your-directory/your-filename.dat';
   data raw;
      infile w lrecl=80 pad;
      input field1 $ 2-3  field2 $ 5-12  field3 $ 15-22
            field4 25-36  field5 $ 40-47 field6 50-61;
   data lp;
      if _type_="FREE" then _type_="MIN";
      if lag(_type_)="*HS" then _type_="RHS";
   proc sort data=lp;
      by _col_;
   proc intpoint
      condata=lp sparsecondata rhsobs=rhs grouped=condata
      conout=solutn    /* SAS data set for the optimal solution */
      nnas=1700 ncoefs=4000 ncons=700
      printlevel2=2 memrep;
   proc lp data=lp sparsedata endpause
      time=3600 maxit1=100000 maxit2=100000;
   show status;

You will have to specify the appropriate path and file name in which your MPS format data resides. SASMPSXS is a SAS macro provided within SAS/OR software. The MPS format resembles the sparse format of the CONDATA= data set for PROC INTPOINT. The SAS macro SASMPSXS examines the MPS data and transfers it into a SAS data set, while automatically taking into account how the MPS format differs slightly from PROC INTPOINT’s sparse format.

The parameters NNAS=1700, NCOEFS=4000, and NCONS=700 indicate the approximate (overestimated) numbers of variables, coefficients, and constraints this model has. You must change these to your problem’s dimensions. Knowing these, PROC INTPOINT is able to utilize memory better and read the data faster.
These parameters are optional. The PROC SORT preceding PROC INTPOINT is not necessary, but sorting the SAS data set can speed up PROC INTPOINT when it reads the data. After the sort, the data for each column are grouped together, so GROUPED=condata can be specified. For small problems, presorting and specifying those additional options is not going to greatly influence PROC INTPOINT’s run time. However, when problems are large, presorting and specifying those additional options can be very worthwhile.

If you generate the model yourself, you will be familiar enough with it to know what to specify for the RHSOBS= parameter. If the value of the SAS variable in the COLUMN list is equal to the character string specified as the RHSOBS= option, the data in that observation is interpreted as right-hand-side data as opposed to coefficient data. If you do not know what to specify for the RHSOBS= option, you should first run PROC LP, optionally setting MAXIT1=1 and MAXIT2=1. PROC LP will output a Problem Summary that includes the line

   Rhs Variable    rhs-charstr

BYTES=20000000 is the size of the working memory PROC INTPOINT is allowed. The options PRINTLEVEL2=2 and MEMREP indicate that you want to see an iteration log and messages about memory usage. Specifying these options is optional.
{"url":"http://support.sas.com/documentation/cdl/en/ormpug/63352/HTML/default/ormpug_intpoint_sect053.htm","timestamp":"2024-11-14T22:02:43Z","content_type":"application/xhtml+xml","content_length":"14022","record_id":"<urn:uuid:62066690-647e-414d-8a20-cf4a67eb8391>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00758.warc.gz"}
How to get timedelta in minutes using Python?

You can use the timedelta class in the datetime module to represent a duration in minutes. To create a timedelta object representing a certain number of minutes:

   from datetime import timedelta

   minutes = 10
   delta = timedelta(minutes=minutes)

You can then use the total_seconds() method to get the total number of seconds in the timedelta object and divide by 60 to get the number of minutes:

   minutes = delta.total_seconds() / 60

Alternatively, you can use the seconds attribute of the timedelta object and divide by 60. Note, however, that seconds holds only the sub-day portion of the duration (0 through 86399, after whole days are split off into the days attribute), so this only equals the total for deltas shorter than one day:

   minutes = delta.seconds / 60
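The difference between the two attributes matters as soon as a delta spans a day or more; total_seconds() is the safe choice. A quick sketch:

```python
from datetime import timedelta

delta = timedelta(days=1, minutes=30)

total_minutes = delta.total_seconds() / 60   # whole duration: 1470.0 minutes
partial_minutes = delta.seconds / 60         # only the sub-day part: 30.0 minutes

print(total_minutes, partial_minutes)
```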
{"url":"https://devhubby.com/thread/how-to-get-timedelta-in-minutes-using-python","timestamp":"2024-11-11T14:03:05Z","content_type":"text/html","content_length":"130661","record_id":"<urn:uuid:fc7147d0-aa98-4582-860f-6506eb2d4294>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00641.warc.gz"}
A Gadget for 3-Colorings Following up on Bill's post earlier this week on counting the number of 3-colorings, Steven Noble emailed us with some updated information. The first proof that counting 3-colorings is #P-complete is in a 1986 paper by Nati Linial. That proof uses a Turing reduction using polynomials based on posets. Steven points to a 1994 thesis of James Annan under the direction of Dominic Welsh at Oxford that gives the gadget construction that I so tried and failed to do in Bill's post. Think of color 0 as false and color 1 as true and use this gadget in place of the OR-gadgets in the regular NP-complete proof of 3-coloring. I checked all eight values of a, b and c and the gadget works as promised. Steven also noted that counting 2-colorings is easy, because for each connected component, there are either 0 or 2 colorings.
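Steven’s closing remark about 2-colorings translates directly into code: a graph has a proper 2-coloring iff every connected component is bipartite, and each bipartite component contributes a factor of exactly 2 (swap the two colors). A sketch, assuming vertices labeled 0..n-1:

```python
from collections import deque

def count_2_colorings(n, edges):
    """Count proper 2-colorings of an undirected graph on vertices 0..n-1.
    Each bipartite connected component contributes a factor of 2;
    any odd cycle makes the count 0."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [None] * n
    count = 1
    for s in range(n):
        if color[s] is not None:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    return 0        # odd cycle: this component has 0 colorings
        count *= 2                  # the component's two mirror colorings
    return count
```

A triangle gives 0, a path on three vertices gives 2, and two disjoint edges give 4, matching the component-by-component argument.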
{"url":"https://blog.computationalcomplexity.org/2022/06/a-gadget-for-3-colorings.html?m=1","timestamp":"2024-11-05T00:34:21Z","content_type":"application/xhtml+xml","content_length":"52235","record_id":"<urn:uuid:7dc064d3-be2e-4507-aff8-280f882678bc>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00221.warc.gz"}
EE 396: Lecture 13
Ganesh Sundaramoorthi
March 29, 2011

Computing Distance Functions and Minimal Paths

In the last lecture, we gave one possible way of implicitly representing a simple closed curve: via the signed distance representation. At first sight it looks quite computationally expensive to compute, but now we derive a fast method to compute the distance function, and indeed more general distance functions. The clever algorithm that we present is in [1].

Given a closed, compact and smooth surface $S \subset \Omega \subset \mathbb{R}^n$ ($n \ge 2$) (in the simplest case the surface will be a curve $c \subset \mathbb{R}^2$), we consider the problem of computing for each $x \in \Omega$ the weighted distance to the surface:

$$ d_S(x) = \inf_{\gamma \in \Gamma_x} \int_0^1 \phi(\gamma(t))\,|\gamma'(t)|\,dt = \inf_{\gamma \in \Gamma_x} L_\phi(\gamma) $$

where $\phi : \Omega \to \mathbb{R}^+$ is called the metric (or local cost) and

$$ \Gamma_x = \{\gamma : [0,1] \to \Omega \;:\; \gamma(0) = x,\ \gamma(1) \in S\}. $$

Notice that $ds = |\gamma'(t)|\,dt$ (that is, the arclength element of the path $\gamma$), and thus $\int_0^1 \phi(\gamma(t))\,|\gamma'(t)|\,dt$ is a weighted length of $\gamma$ (the weight $\phi$ being spatially dependent). Thus, we see that $d_S(x)$ is the weighted length of the smallest weighted-length path starting from $x$ and ending at any point on $S$.

Notice that if $\phi = 1$, then the integral is just the length of $\gamma$ and

$$ d_S(x) = \inf_{\gamma \in \Gamma_x} L(\gamma) = \inf_{y \in S} |x - y| $$

since straight lines are the shortest-length paths, and the straight-line distance from $x$ to a point $y \in S$ is just $|x - y|$. So in the case $\phi = 1$, $d_S$ reduces to the distance function that we defined in the previous lecture.

An example use of the general problem for non-constant $\phi$ is object detection: one finds the path that corresponds to the boundary of an object. A simple way of doing this is to choose

$$ \phi(x) = \frac{1}{1 + |\nabla I(x)|^2} $$

where $I : \Omega \to \mathbb{R}$ is the image.
Note that the boundary of an object often has large image gradients, and thus the minimal path would align with the object boundary; the minimal path algorithm has been used for detecting roads in aerial images and for vessel detection in medical images [1]. We shall now derive an algorithm whereby we can compute $d_S(x)$ for each $x \in \Omega$ quickly and thereby also compute the minimal path (w.r.t. the length defined by $L_\phi$) from each $x \in \Omega$ to $S$, all in one shot!

Euler-Lagrange Equations

By now we know that in order to find possible minima of $L_\phi$ so as to compute $d_S$, we locate critical paths, i.e., we look for paths such that $\nabla L_\phi = 0$, which is simply those paths that satisfy the Euler-Lagrange equations. Thus, we compute the Euler-Lagrange equations. To do so, we compute the directional derivative, but first we note the permissible class of perturbations of a path $\gamma \in \Gamma_x$. Note that a permissible perturbation $h : [0,1] \to \mathbb{R}^n$ is such that $\gamma + \varepsilon h \in \Gamma_x$ for small $\varepsilon > 0$. This implies that

$$ \gamma(0) + \varepsilon h(0) = x \;\Rightarrow\; h(0) = 0 $$
$$ \gamma(1) + \varepsilon h(1) \in S \;\Rightarrow\; h(1) \cdot N_S(\gamma(1)) = 0 $$

where $N_S(\gamma(1))$ is the normal to $S$ at the point $\gamma(1)$. Note that the last relation simply says that the perturbation $h(1)$ can move $\gamma(1)$ only in a direction tangent to the surface $S$ (otherwise, if we were to move in the normal direction, then $\gamma(1) + \varepsilon h(1) \notin S$). Thus,

$$ V_{\Gamma_x} = \{ h : [0,1] \to \mathbb{R}^n \;:\; h(0) = 0,\ h(1) \cdot N_S(\gamma(1)) = 0 \} $$

are the permissible perturbations of $\Gamma_x$. Now,

$$ dL_\phi(\gamma) \cdot h = \left.\frac{d}{d\varepsilon}\, L_\phi(\gamma + \varepsilon h)\right|_{\varepsilon=0} \tag{9} $$
$$ = \int_0^1 \left.\frac{d}{d\varepsilon}\, \phi(\gamma(t) + \varepsilon h(t))\,|\gamma'(t) + \varepsilon h'(t)|\right|_{\varepsilon=0} dt \tag{10} $$
$$ = \int_0^1 \nabla\phi(\gamma(t)) \cdot h(t)\,|\gamma'(t)| + \phi(\gamma(t))\,\frac{\gamma'(t)}{|\gamma'(t)|} \cdot h'(t)\; dt \tag{11} $$
$$ = \int_0^1 \left[ \nabla\phi(\gamma(t)) - \frac{1}{|\gamma'(t)|}\frac{d}{dt}\!\left( \phi(\gamma(t))\,\frac{\gamma'(t)}{|\gamma'(t)|} \right) \right] \cdot h(t)\,|\gamma'(t)|\,dt \tag{12} $$
$$ \qquad +\; \left[ \phi(\gamma(t))\,\frac{\gamma'(t)}{|\gamma'(t)|} \cdot h(t) \right]_{t=0}^{t=1} \tag{13} $$

where in the last expression we have integrated by parts. We first note that

$$ ds = |\gamma'(t)|\,dt, \qquad \frac{d}{ds} = \frac{1}{|\gamma'(t)|}\frac{d}{dt}, \qquad \gamma_s(t) = \frac{\gamma'(t)}{|\gamma'(t)|}, $$

where $s$ denotes the arclength parameter of $\gamma$.
Also, note that the boundary term vanishes (obviously, at $t = 0$ it vanishes since $h(0) = 0$). We argue that for a minimal path $\gamma$,

$$ \gamma_s(1) \cdot h(1) = \frac{\gamma'(1)}{|\gamma'(1)|} \cdot h(1) = 0; $$

indeed, $\gamma_s(1)$ should be normal to the surface $S$; otherwise, if there were a component of $\gamma_s(1)$ tangent to the surface, then we could end the path at $y + \epsilon\,\gamma_s(1) \in S$ for small $\epsilon$, and note there is some $0 < t_0 < 1$ where $\gamma(t_0) = y + \epsilon\,\gamma_s(1)$. We could then define a new path $\tilde\gamma$ as $\tilde\gamma(t) = \gamma(t_0\, t)$, $t \in [0,1]$, and this new path would have smaller length (w.r.t. $L_\phi$) than $\gamma$, contradicting that $\gamma$ is minimal. Therefore, $\gamma_s(1)$ must be normal to $S$, and note that by our permissible perturbations, $h(1)$ is tangent to $S$. Hence, we see that $\gamma_s(1) \cdot h(1) = 0$. Now,

$$ dL_\phi(\gamma) \cdot h = \int_\gamma \left[ \nabla\phi(\gamma(s)) - \frac{d}{ds}\big(\phi(\gamma(s))\,\gamma_s(s)\big) \right] \cdot h(s)\, ds \tag{17} $$

where we have made a change of variable to the arclength variable $s$. We therefore see that the Euler-Lagrange equations are

$$ \nabla L_\phi(\gamma) = \nabla\phi(\gamma(s)) - \frac{d}{ds}\big(\phi(\gamma(s))\,\gamma_s(s)\big) = 0. \tag{18} $$

Eikonal Equation and Relation to the E-L of $L_\phi$

Now, in previous lectures we would guess an initial path, perform gradient descent on $L_\phi$, and then converge to a local minimum. However, there is a much smarter method to obtain the global minimum without any initial guess of the path. To see this, suppose that we solve the PDE

$$ \begin{cases} |\nabla U(x)| = \phi(x), & x \in \Omega \\ U(x) = 0, & x \in S \end{cases} \tag{19} $$

This equation is known as the eikonal equation, and we shall see fast methods to compute its solution shortly; but for now assume that we have solved the PDE and have the solution $U$. Define a path by the differential equation below:

$$ \begin{cases} \gamma'(t) = \nabla U(\gamma(t)) \\ \gamma(0) = x \end{cases} \tag{20} $$

Then

$$ \gamma_s(t) = \frac{\gamma'(t)}{|\gamma'(t)|} = \frac{\nabla U(\gamma(t))}{|\nabla U(\gamma(t))|} = \frac{\nabla U(\gamma(t))}{\phi(\gamma(t))} \;\Rightarrow\; \phi(\gamma(t))\,\gamma_s(t) = \nabla U(\gamma(t)). $$

Also,

$$ \nabla\phi(\gamma(t)) = \nabla\big(|\nabla U(x)|\big)\big|_{x=\gamma(t)} = H_U(\gamma(t))\,\frac{\nabla U(\gamma(t))}{|\nabla U(\gamma(t))|} $$

where $H_U$ is the Hessian of $U$. Now,

$$ \nabla\phi(\gamma(s)) - \frac{d}{ds}\big(\phi(\gamma(s))\,\gamma_s(s)\big) = H_U(\gamma(t))\,\frac{\nabla U(\gamma(t))}{|\nabla U(\gamma(t))|} - \frac{d}{ds}\,\nabla U(\gamma(t)) = H_U(\gamma(t))\,\frac{\nabla U(\gamma(t))}{|\nabla U(\gamma(t))|} - H_U(\gamma(t))\,\gamma_s(t) = 0. \tag{23} $$

Thus, we see that the $\gamma$ that solves the differential equation (20) solves the Euler-Lagrange equations for $L_\phi$. We now compute the weighted length of the path $\gamma$ that solves (20), but we first note that

$$ \frac{d}{ds}\, U(\gamma(t)) = \nabla U(\gamma(t)) \cdot \gamma_s(t) = \nabla U(\gamma(t)) \cdot \frac{\nabla U(\gamma(t))}{|\nabla U(\gamma(t))|} = |\nabla U(\gamma(t))| $$

and thus

$$ L_\phi(\gamma) = \int_0^1 \phi(\gamma(t))\,|\gamma'(t)|\,dt = \int |\nabla U(\gamma(t))|\,ds = \int \frac{d}{ds}\, U(\gamma(t))\,ds = U(\gamma(1)) - U(\gamma(0)). \tag{25} $$

If we choose $x \in S$ then $\gamma(0) = x \in S$ and then $L_\phi(\gamma) = U(\gamma(1))$, which says that $U(y)$ is the length of the path that solves the E-L equation, starts at some point in $S$, and ends at $y$.

Global Minimum of $L_\phi$

We show now that in fact $\gamma$ defined by (20) is the global minimum path from $x$ ending at some point in $S$. To see this, we define the following curve evolution (surface evolution for higher dimensions; the argument works there too, but we show it for curves to keep the notation simple):

$$ \begin{cases} \partial_t c(t,p) = \dfrac{1}{\phi(c(t,p))}\, N(t,p), & t > 0 \\ c(0,\cdot) = S, & t = 0 \end{cases} \tag{27} $$

that is, the initial curve is the set $S$ (which, for the sake of simplicity of notation, assume is a curve), and then we deform the curve in the outward normal direction at a speed proportional to $1/\phi$. Note that $\phi \ge 0$, and so the curve moves outward from $S$. We note that this is the same form of equation that we have seen in the last lectures on Region Competition. We now show that $c(t,\cdot)$ corresponds to the $t$ level set of $U$, that is,

$$ c(t,\cdot) = \{x \in \mathbb{R}^n : U(x) = t\}, $$

and that $U(x)$ is in fact the length of the global minimal path from $x$ to $S$. To show this, we apply an inductive argument. That is, suppose that $c(t,\cdot)$ is the $t$ level set of $U$ and that $U(c(t,p))$ is the length of the global minimum path from $c(t,p)$ to $S$; we then show that $c(t+\Delta t,\cdot)$ for small $\Delta t > 0$ is the $t+\Delta t$ level set of $U$, and that $U(c(t+\Delta t,p))$ is the global minimum path length from $c(t+\Delta t,p)$ to $S$.

Now for small $\Delta t > 0$, consider the global minimum path from $c(t+\Delta t,p)$ to the curve $c(t,\cdot) = \{x \in \mathbb{R}^n : U(x) = t\}$.
Note that the minimal paths emanating from $c(t,\cdot)$ to points outside $c(t,\cdot)$ will be normal to $c(t,\cdot)$ (this uses the same reasoning as in the previous section, when we concluded that $h(1) \cdot N_S = 0$). Also, if a point $x$ is in a (small) neighborhood outside of $c(t,\cdot)$, then the minimal path from $x$ to $c(t,\cdot)$ will be a straight line that is normal to $c(t,\cdot)$ and touches $x$. This is because minimal paths are locally approximated by straight lines ($\phi$ is nearly constant in a neighborhood if it is continuous). Thus, since $c(t+\Delta t,p)$ is in a neighborhood outside $c(t,\cdot)$, the minimal path from $c(t+\Delta t,p)$ to $c(t,\cdot)$ is simply the line segment from $c(t,p)$ to $c(t+\Delta t,p)$, and then

$$ c(t+\Delta t,p) - c(t,p) = \Delta t\,\frac{1}{\phi(c(t,p))}\,N(t,p), \qquad |c(t+\Delta t,p) - c(t,p)| = \frac{\Delta t}{\phi(c(t,p))}, $$

the latter being the Euclidean length from $c(t,p)$ to $c(t+\Delta t,p)$; thus the minimal path length from $c(t,\cdot)$ to $c(t+\Delta t,p)$ is

$$ L_\phi = \int \phi(c(t,p))\,ds = \phi(c(t,p))\,|c(t+\Delta t,p) - c(t,p)| = \Delta t. \tag{29} $$

Therefore, the global minimal path from $c(t+\Delta t,p)$ to $S$ will be the global minimal path from $c(t+\Delta t,\cdot)$ to $c(t,\cdot)$ concatenated with the minimal path from $c(t,\cdot)$ to $S$. But by the inductive hypothesis, we know that the global minimum path length from $c(t,p)$ to $S$ is $U(c(t,p))$, and so

$$ U(c(t+\Delta t,p)) = U\!\left( c(t,p) + \Delta t\,\frac{1}{\phi(c(t,p))}\,N(t,p) \right) = U(c(t,p)) + \Delta t = t + \Delta t, \tag{30} $$

that is, $c(t+\Delta t,\cdot)$ is the $t+\Delta t$ level set of $U$.

The preceding argument is also known as the Principle of Dynamic Programming. To summarize, we have just established that $U(x)$ is the global minimal path length from $x$ to $S$, and that the level set $\{U = t\}$ is $c(t,\cdot)$. Moreover, the global minimal path is computed using (20).

Fast Marching Algorithm

The equations (27) are the basis for a fast algorithm for computing $U$ known as Fast Marching [2] (see also [3]).
Indeed, the idea of the algorithm is simply to evolve the curve $c$ (propagating the level sets of $U$ outward) and simultaneously record the arrival times $t$ at each of the points of $c(t,\cdot)$ using the eikonal equation (19). Note that information (i.e., the initial value $U = 0$ on $S$) propagates outward (since $\phi > 0$), and thus we must use an upwind difference scheme to discretize (19), as we did in the previous lecture for the region-based term of region competition.

The Fast Marching algorithm then works by updating the labels of points, where the labels are FAR (points that have not been touched by the front $c(t,\cdot)$), ALIVE (points that have already been passed by the front), and TRIAL (points that the front is currently visiting). The TRIAL points are where the algorithm solves for the value of $U$, using only the information of $U$ at the currently ALIVE or TRIAL points so as to obey the upwind scheme. The algorithm works on a discrete grid; in two dimensions (the generalization to higher dimensions is trivial) it is as follows. Initialize

$$ U_{ij} = \begin{cases} +\infty, & ij \notin S \\ 0, & ij \in S \end{cases} \qquad l_{ij} = \begin{cases} \text{FAR}, & ij \notin S \\ \text{TRIAL}, & ij \in S \end{cases} \tag{31} $$

where $l_{ij}$ denotes the label of the pixel $ij$. Then loop the following until all grid points have been marked ALIVE:

1. $(i_m, j_m) = \arg\min_{ij \in \text{TRIAL}} U_{ij}$
2. $l_{i_m j_m} = \text{ALIVE}$
3. For each neighbor $kl$ of $i_m j_m$:
   - If $l_{kl} = \text{FAR}$, then set $l_{kl} = \text{TRIAL}$.
   - If $l_{kl} = \text{TRIAL}$ (i.e., not yet ALIVE), solve

     $$ \big(\max(u - U_{i-1,j},\, u - U_{i+1,j},\, 0)\big)^2 + \big(\max(u - U_{i,j-1},\, u - U_{i,j+1},\, 0)\big)^2 = \phi_{ij}^2 \tag{32} $$

     for $u$ (with $ij$ taken as the pixel $kl$ being updated), and set $U_{kl} = \min(u, U_{kl})$.

The minimum in step 1 can be computed quickly by using a heap structure to keep the TRIAL points ordered. The complexity of this algorithm is then $O(N \log N)$, where $N$ is the number of grid points (the $\log N$ comes from maintaining the heap structure).
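The labeling loop above maps directly onto a priority queue. The following is a minimal, illustrative Python sketch of 2-D Fast Marching on a unit-spacing grid, not the lecture's reference code: the local solver implements the upwind update, falling back to a one-sided update when the two axis values differ by more than the local cost.

```python
import heapq
import math

def fast_marching(phi, seeds):
    """Solve |grad U| = phi on a unit-spacing grid with U = 0 at the seeds,
    via Fast Marching. phi: 2-D list of positive local costs;
    seeds: list of (i, j) grid points forming the set S."""
    ny, nx = len(phi), len(phi[0])
    INF = math.inf
    U = [[INF] * nx for _ in range(ny)]
    alive = [[False] * nx for _ in range(ny)]
    heap = []
    for i, j in seeds:
        U[i][j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def solve_local(i, j):
        # Smallest neighbor value along each axis (the upwind values).
        a = min(U[i-1][j] if i > 0 else INF, U[i+1][j] if i < ny - 1 else INF)
        b = min(U[i][j-1] if j > 0 else INF, U[i][j+1] if j < nx - 1 else INF)
        f = phi[i][j]
        if abs(a - b) >= f:                 # only one axis is active
            return min(a, b) + f
        # Both axes active: solve (u - a)^2 + (u - b)^2 = f^2.
        return 0.5 * (a + b + math.sqrt(2 * f * f - (a - b) ** 2))

    while heap:
        u, i, j = heapq.heappop(heap)
        if alive[i][j]:
            continue                        # stale heap entry
        alive[i][j] = True                  # freeze the smallest TRIAL point
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            k, l = i + di, j + dj
            if 0 <= k < ny and 0 <= l < nx and not alive[k][l]:
                cand = solve_local(k, l)
                if cand < U[k][l]:
                    U[k][l] = cand
                    heapq.heappush(heap, (cand, k, l))
    return U
```

With phi identically 1 this returns an upwind approximation of Euclidean distance from the seed set: axis-aligned distances come out exactly, while diagonal distances are slightly overestimated on a coarse grid.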
Fast Sweeping Algorithm

Another algorithm for computing the solution of the eikonal equation is fast sweeping [5] (see also [4]), which does not require the bookkeeping of a heap and labels as in the Fast Marching Method, and indeed converges after a few iterations, each costing $O(N)$. The algorithm initializes

$$ U_{ij} = \begin{cases} +\infty, & ij \notin S \\ 0, & ij \in S \end{cases} \tag{33} $$

and then iterates the following sweeps until convergence:

1. for $j = 1, \dots, N_y$, for $i = 1, \dots, N_x$: solve (32) and set $U_{ij} = \min(u, U_{ij})$
2. for $j = 1, \dots, N_y$, for $i = N_x, \dots, 1$: solve (32) and set $U_{ij} = \min(u, U_{ij})$
3. for $j = N_y, \dots, 1$, for $i = 1, \dots, N_x$: solve (32) and set $U_{ij} = \min(u, U_{ij})$
4. for $j = N_y, \dots, 1$, for $i = N_x, \dots, 1$: solve (32) and set $U_{ij} = \min(u, U_{ij})$

The algorithm converges rapidly; in fact, the scheme converges within 2-3 iterations.

References

[1] L.D. Cohen and R. Kimmel. Global minimum for active contour models: A minimal path approach. International Journal of Computer Vision, 24(1):57-78, 1997.
[2] J.A. Sethian. A fast marching level set method for monotonically advancing fronts. Proceedings of the National Academy of Sciences of the United States of America, 93(4):1591, 1996.
[3] J.N. Tsitsiklis. Efficient algorithms for globally optimal trajectories. IEEE Transactions on Automatic Control, 40(9):1528-1538, 1995.
[4] A.J. Yezzi Jr and J.L. Prince. An Eulerian PDE approach for computing tissue thickness. IEEE Transactions on Medical Imaging, 22(10):1332-1339, 2003.
[5] H. Zhao. A fast sweeping method for eikonal equations. Mathematics of Computation, 74(250):603-628, 2005.
{"url":"https://p.pdfkul.com/ee-396-lecture-13_5a1ead2a1723dd03ebf016df.html","timestamp":"2024-11-14T20:26:04Z","content_type":"text/html","content_length":"65165","record_id":"<urn:uuid:3c8c081a-f003-4f58-8a4f-69e4c43b7553>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00625.warc.gz"}
All Signs Point to the Discriminant

Have you ever owned one of those Magic 8 Balls? They look like comically oversized pool balls, but have a flat window built into them, so that you can see what's inside: a 20-sided die floating in disgusting opaque blue goo. Supposedly, the billiard ball has prognostic powers; all you have to do is ask it a question, give it a shake, and slowly, mystically, like a petroleum-covered seal emerging from an oil spill, the die will rise to the little window and reveal the answer to your question.

The quadratic equation contains a Magic 8 Ball of sorts. The expression b^2 - 4ac from beneath the radical sign is called the discriminant, and it can actually determine for you how many solutions a given quadratic equation has, if you don't feel like actually calculating them. Considering that an unfactorable quadratic equation requires a lot of work to solve (tons of arithmetic abounds in the quadratic formula, and a whole bunch of steps are required in the completing-the-square method), it's often useful to gaze into the mystic beyond to make sure the equation even has any real number solutions before you spend any time actually trying to find them.

Talk the Talk: The discriminant is the expression b^2 - 4ac, which is defined for any quadratic equation ax^2 + bx + c = 0. Based upon the sign of the expression, you can determine how many real number solutions the quadratic equation has.

Here's how the discriminant works. Given a quadratic equation ax^2 + bx + c = 0, plug the coefficients into the expression b^2 - 4ac to see what results:

• If you get a positive number, the quadratic will have two unique solutions.
• If you get 0, the quadratic will have exactly one solution, a double root.
• If you get a negative number, the quadratic will have no real solutions, just two imaginary ones. (In other words, solutions will contain the i you learned about in Wrestling with Radicals.)

The discriminant isn't magic.
It just shows how important that radical is in the quadratic formula. If its radicand is 0, for example, then you'll get a single solution. If, however, b^2 - 4ac is negative, then you'll have a negative inside a square root sign in the quadratic formula, meaning only imaginary solutions.

Example 4: Without calculating them, determine how many real solutions the equation 3x^2 - 2x = -1 has.

Solution: Set the quadratic equation equal to 0 by adding 1 to both sides. Then set a = 3, b = -2, and c = 1, and evaluate the discriminant:

• b^2 - 4ac
• = (-2)^2 - 4(3)(1)
• = 4 - 12
• = -8

Because the discriminant is negative, the quadratic equation has no real number solutions, only two imaginary ones.

You've Got Problems. Problem 4: Without calculating them, determine how many real solutions the equation 25x^2 - 40x + 16 = 0 has.

Excerpted from The Complete Idiot's Guide to Algebra © 2004 by W. Michael Kelley. All rights reserved including the right of reproduction in whole or in part in any form. Used by arrangement with Alpha Books, a member of Penguin Group (USA) Inc.
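The sign test in Example 4 is mechanical enough to script. A small sketch (using Python's cmath so the negative-discriminant case still yields the two imaginary roots):

```python
import cmath

def classify_quadratic(a, b, c):
    """Classify ax^2 + bx + c = 0 by the sign of its discriminant b^2 - 4ac
    and return (description, roots)."""
    d = b * b - 4 * a * c
    if d > 0:
        kind = "two unique real solutions"
    elif d == 0:
        kind = "one real solution (double root)"
    else:
        kind = "no real solutions, two imaginary ones"
    r = cmath.sqrt(d)                     # complex sqrt handles d < 0
    return kind, ((-b + r) / (2 * a), (-b - r) / (2 * a))

print(classify_quadratic(3, -2, 1)[0])    # Example 4: discriminant -8
print(classify_quadratic(1, -5, 6))       # discriminant 1: roots 3 and 2
```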
{"url":"https://www.infoplease.com/math-science/mathematics/algebra/all-signs-point-to-the-discriminant","timestamp":"2024-11-10T22:15:28Z","content_type":"text/html","content_length":"96105","record_id":"<urn:uuid:4f389be1-c992-4ae3-a2a6-9664e60304cd>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00888.warc.gz"}
Section author: Danielle J. Navarro and David R. Foxcroft In this chapter I’ve covered two main topics. The first half of the chapter talks about sampling theory, and the second half talks about how we can use sampling theory to construct estimates of the population parameters. The section breakdown looks like this: As always, there’s a lot of topics related to sampling and estimation that aren’t covered in this chapter, but for an introductory psychology class this is fairly comprehensive I think. For most applied researchers you won’t need much more theory than this. One big question that I haven’t touched on in this chapter is what you do when you don’t have a simple random sample. There is a lot of statistical theory you can draw on to handle this situation, but it’s well beyond the scope of this book.
{"url":"https://lsj.readthedocs.io/en/latest/Ch08/Ch08_Estimation_6.html","timestamp":"2024-11-13T04:51:53Z","content_type":"text/html","content_length":"12877","record_id":"<urn:uuid:6fb59ecf-8a54-4e28-a5f6-9001b8884ea3>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00503.warc.gz"}
tech-multimedia by thread Thread index Last updated: Wed May 06 07:17:55 2015 Timezone is UTC • Problems (panics) with using my uvideo... Help ple, Martin S. Weber □ Re: Problems (panics) with using my uvideo... Help, Jeremy Morse ☆ Re: Problems (panics) with using my uvideo... Help, Martin S. Weber ☆ Re: Problems (panics) with using my uvideo... Help, Martin S. Weber ○ Re: Problems (panics) with using my uvideo... Help, Jeremy Morse ○ Re: Problems (panics) with using my uvideo... Help, Martin S. Weber ○ Re: Problems (panics) with using my uvideo... Help, Martin S. Weber ○ Re: Problems (panics) with using my uvideo... Help, Jeremy Morse ○ Re: Problems (panics) with using my uvideo... Help, Martin S. Weber ○ Re: Problems (panics) with using my uvideo... Help, Martin S. Weber ○ Re: Problems (panics) with using my uvideo... Help, Jeremy Morse ○ Re: Problems (panics) with using my uvideo... Help, Martin S. Weber ○ Re: Problems (panics) with using my uvideo... Help, Jared D. McNeill ○ Re: Problems (panics) with using my uvideo... Help, Martin S. Weber ○ Re: Problems (panics) with using my uvideo... Help, Martin S. Weber ○ Re: Problems (panics) with using my uvideo... Help, Jeremy Morse ○ Re: Problems (panics) with using my uvideo... Help, Martin S. Weber ○ Re: Problems (panics) with using my uvideo... Help, Jeremy Morse ○ Re: Problems (panics) with using my uvideo... Help, Patrick Mahoney ○ Re: Problems (panics) with using my uvideo... Help, Martin S. Weber ○ Re: Problems (panics) with using my uvideo... Help, Jeremy Morse ○ Re: Problems (panics) with using my uvideo... Help, Jared D. McNeill ○ Re: Problems (panics) with using my uvideo... Help, Martin S. Weber ○ Re: Problems (panics) with using my uvideo... Help, Jeremy Morse ○ Re: Problems (panics) with using my uvideo... Help, Martin S. Weber • a common DVB infrastructure, Manu Abraham • test mail, Manu Abraham • Welcome tech-multimedia, S.P.Zeidler Mail converted by MHonArc
{"url":"https://mail-index.netbsd.org/tech-multimedia/thread1.html","timestamp":"2024-11-11T15:13:50Z","content_type":"text/html","content_length":"7216","record_id":"<urn:uuid:133e8aa6-a863-4524-9bea-0570ed57ed1f>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00060.warc.gz"}
Hypothesis Testing | Studywell.com

Hypothesis Testing

What is Hypothesis Testing?

Hypothesis Testing in statistics is exactly that: testing a hypothesis, where a hypothesis is a theory about a given situation. For example, you might have a coin that you suspect is biased, as coin tossing seems to be favouring heads over tails. You might like to test the hypothesis that the coin is biased in favour of heads; this would be hypothesis testing. See this example and some other Examples of Hypothesis Testing.

So, what is a hypothesis test? First of all, consider a given statement such as "my coin is fair", which is normally accepted/expected. This is known as the null hypothesis. However, a situation arises which contradicts it, which leads us to present an alternative hypothesis. Part of the hypothesis test is to set up an experiment to find evidence to accept or reject the null hypothesis. In the most common approach, the probability of the result (or worse) of the experiment is calculated. If the probability is low enough (below the significance level), one can conclude that the result is unlikely to be due to chance. In this case, the alternative hypothesis is more likely and the null hypothesis should be rejected.

In hypothesis testing we only ever ACCEPT or REJECT the null hypothesis.

In order to test your hypothesis mathematically, you must first be very clear about what you are testing. The hypothesis test should be set up in a formal fashion.

Null and Alternative Hypotheses

The first step is to write down the statement you wish to challenge and provide its associated alternative. This involves stating a null hypothesis and an alternative hypothesis. We conventionally call these H0 and H1, because writing "null hypothesis" and "alternative hypothesis" is tedious. The following shows how we might write down the null and alternative hypotheses in words; we typically, however, write them as equations:

H0: This is the commonly accepted theory, the one that is being challenged.
It is the opposite of the alternative hypothesis. For example, the null hypothesis is that the coin mentioned above is FAIR.

H1: The alternative hypothesis is the one being presented. This is the theory that is being tested using probabilities. For example, the alternative hypothesis is that the coin mentioned above is BIASED.

However, the null and alternative hypotheses should be written in terms of a test statistic. The null hypothesis will either be accepted or rejected depending on the probability of the outcome of an experiment.

Test Statistic

In hypothesis testing, the test statistic is the statistic that is being assessed. In the example above, the test statistic is the probability p of tossing heads. The null and alternative hypotheses should both be written in terms of this statistic: H0: p = 0.5 and H1: p > 0.5.

Significance Level

In order to test a hypothesis, a significance level must be specified. It is the threshold below which the probability of the experiment's result occurring by chance is considered too low. For example, suppose an experiment is conducted with the coin mentioned above, and the outcome of the experiment (or worse) occurs with a probability of 0.04 assuming that the coin is fair. This is below the 5% significance level and so the null hypothesis should be rejected. Note that for discrete probability distributions, it is unlikely to get an experiment outcome whose probability is exactly the same as the significance level. We usually go for the outcome that gets us the closest.

1 and 2 Tail Tests

If the alternative hypothesis suggests bias in a given direction, i.e. the probability of tossing heads is greater than 0.5, then the test is one-tailed. On the contrary, if the direction is not specified, the test is two-tailed. For example, the hypotheses H0: p = 0.5 and H1: p ≠ 0.5 could be stated if the coin is biased either way. For a two-tailed test, since we are challenging the null hypothesis in either direction, we must split the significance level between the two tails. Probabilities at both extremes of the experiment must be calculated and assessed. See some Examples of Hypothesis Testing.
Critical Value and Critical Region/Acceptance Region

Given the significance level, the critical and acceptance regions are the sets of values that lead to rejection and acceptance of the null hypothesis, respectively. The values in these regions correspond to outcomes of the experiment. The critical region is the set of experiment outcomes that lead to rejection of the null hypothesis. Likewise, the acceptance region is the set of experiment outcomes that lead to acceptance of the null hypothesis. The critical values are the boundary values between the critical and acceptance regions.

The p-value is essentially a probability value. In the approach mentioned above, the null hypothesis is rejected if the probability of an outcome of an experiment is below the significance level; with this approach, the significance level is chosen first. Alternatively, one can define the critical region given a certain significance level and see if the outcome of the experiment falls inside or outside of this critical region. The main advantage of the original approach is seeing at what levels the outcome is significant. This, however, can introduce bias in the choice of the significance level so as to ensure rejection of the null hypothesis. See some Examples of Hypothesis Testing.
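The coin example can be checked numerically. The following sketch (Python, standard library only; the "9 heads in 10 tosses" figures are illustrative assumptions, not from the page) computes the one-tailed probability of seeing at least k heads under the null hypothesis p = 0.5 and compares it against a 5% significance level:

```python
from math import comb

def binomial_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the one-tailed p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# H0: p = 0.5 (fair coin), H1: p > 0.5 (biased towards heads)
n_tosses, n_heads = 10, 9
p_value = binomial_tail(n_tosses, n_heads)
reject_null = p_value < 0.05  # compare with the 5% significance level
print(f"P(X >= {n_heads}) = {p_value:.4f}; reject H0: {reject_null}")
# -> P(X >= 9) = 0.0107; reject H0: True
```

Since 0.0107 is below the 5% significance level, the outcome lands in the critical region and the null hypothesis is rejected.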
Calculate growth rate in R

I've been trying to calculate the CAGR between the data for each row, but I can't seem to find a function to do precisely that in R. I found a package with a growth.rate function for time series, but it doesn't take my data in the order I need (it uses a Bayesian method for velocity of growth, which I don't need).

I'm currently trying to work out the relative growth rate of individuals using the equation RGR = (ln W2 - ln W1)/(t2 - t1), that is, the log of weight at time 2 minus the log of weight at time 1, divided by time 2 minus time 1. Most individuals were measured 4 times, so I'd use the 1st and 4th measures; however, not all were, so some were measured 3 times and others 2.

Exponential growth follows N_t = N_0 e^(rt). Here the population at time t, N_t, is a function of time and can be calculated using N_0, the initial population size; r, the rate of increase; and t, time. In differential form, dN/dT = rN, where dN/dT is the growth rate of the population at a given instant, N is the population size, T is time, and r is a constant. Of the parameters typically calculated, the most important is the relative growth rate (RGR), defined as the parameter r in this equation.

By the Rule of 70, the doubling time (dt) is equal to 70 divided by the growth rate (r): dt = 70/r. Note that the growth rate r must be entered as a whole number and not a decimal; for example, 5% must be entered as 5 instead of 0.05.

In the dividend growth model, r is the company's cost of equity and g is the dividend growth rate (DGR). Calculating Average Annual (Compound) Growth Rates: another common method of calculating rates of change is the Average Annual or Compound Growth Rate (AAGR). AAGR works the same way that a typical savings account works: interest is compounded for some period (usually daily or monthly) at a given rate. On a year-over-year basis, growth rates may differ, but a single compound growth rate can summarize the whole time period. One can also calculate an annual growth rate from a fitted model by exponentiating the effect size, subtracting 1 from it, then multiplying the result by 100.

The R package growthrates (Thomas Petzoldt, version 0.8.1, 2019-12-17) is a collection of methods to determine growth rates from experimental data, in particular from batch cultures, and aims to streamline estimation of growth rates from direct or indirect measurements. It can determine growth parameters from single experiments; the logistic equation describes the population size N_t at time t, and a single metric (for example the growth rate r) can be accessed via gc_fit$vals$r. This kind of function can be called in R to analyze OD600 data from a plate reader. A related reference is "grofit: Fitting Biological Growth Curves with R", Journal of Statistical Software, 33(7). In R help pages for growth-rate functions, typical arguments include the number of lags to use in calculating the growth rate and a flag choosing simple growth rates (TRUE) or compound growth rates (FALSE).
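Although the page discusses R, the two formulas being asked about are straightforward in any language. This Python sketch (with illustrative values not taken from the page) computes CAGR from a beginning and ending value, and RGR from two log-transformed weights:

```python
from math import log

def cagr(begin, end, years):
    """Compound annual growth rate: (end/begin)^(1/years) - 1."""
    return (end / begin) ** (1.0 / years) - 1.0

def rgr(w1, w2, t1, t2):
    """Relative growth rate: (ln W2 - ln W1) / (t2 - t1)."""
    return (log(w2) - log(w1)) / (t2 - t1)

print(f"CAGR: {cagr(100.0, 200.0, 5):.4f}")    # 100 -> 200 over 5 years; -> CAGR: 0.1487
print(f"RGR:  {rgr(2.0, 8.0, 0.0, 4.0):.4f}")  # weight 2 -> 8 over 4 time units; -> RGR:  0.3466
```

Applying `rgr` to the 1st and 4th measurements of each individual (or the 1st and last, for those measured fewer times) reproduces the calculation described in the question.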
Abstract colorization of 3D shapes with CPPNs

This post is a continuation of our previous work on CPPNs. Please read there about the basics of CPPNs and the notation we use here. The ideas, code and experiments described in this post enable us to render abstract shapes such as the following one. The basic idea is to assign an RGB color value to each point on the surface of a 3D mesh with a CPPN. Recall that in our last post we denoted the input as $\mathbf{h}_0$ (this vector, along with the parameters/structure of the CPPN, determines the color for a 3D point). Thus, in the case of 3D space: \[\mathbf{h}_0= \begin{pmatrix} x\\ y\\ z \end{pmatrix}\] It’s that simple: for each point on the surface, run its coordinates through a CPPN and obtain the RGB color intensities. Next, we use a 3D renderer to render this colored model onto the screen. The above video shows the simplest shape colored in this way: a sphere. The result nevertheless looks interesting. Of course, it is possible to apply the described colorization process to other shapes as well. As usual, we provide more technical details and code.

Rendering chromatic orbs with OpenGL

We use OpenGL in our experiments even though it is starting to become obsolete in favor of modern 3D APIs, such as Vulkan and Metal. The main reasons are the stable Python bindings (which make it convenient for fast prototyping) and an excellent book by Nicolas P. Rougier. Specifically, we modify the section “5.4.1 Colored cube” for our purposes. Instead of a cube, we render a sphere. Since OpenGL works with polygons, we first have to compute the vertices/triangles that approximate a sphere. A simple way to do this is by recalling the link between the spherical coordinate system and the Cartesian one.
A point $(x, y, z)$ also has the following representation: \[\begin{matrix} x=&r\cos\theta\sin\phi\\ y=&r\sin\theta\sin\phi\\ z=&r\cos\phi \end{matrix}\] for some $r\in[0, \infty)$, $\theta\in[0, 2\pi)$ and $\phi\in[0, \pi]$ (the radial, azimuthal and polar component, respectively). Obtaining a set of vertices on a sphere now consists of setting the radius $r$ to some constant and uniformly sampling $\theta$ and $\phi$. The triangles are generated by connecting neighbouring triplets of vertices. A more thorough discussion of the whole process can be found here. We assign a color only to each vertex of the model due to computational and memory reasons. Colors of other points are interpolated by OpenGL. This significantly simplifies the whole pipeline. However, note that a potentially large number of vertices (and, consequently, triangles) is required to obtain a feel of a smooth surface. The program is available at the following link: sferogen.py. To run it, you need to install glumpy (a simplified OpenGL interface for Python). Also, see this post that briefly explains how a sequence of frames is stitched into a video via FFmpeg. Some example renderings of the program are shown in the following images. Note that the program is quite inefficient. Its worst part is the process of generating vertices on the sphere. If the reader delves into some non-superficial analysis of the code, he/she might notice that the Cartesian coordinates of each vertex are computed multiple times (4 to be exact). This issue could be completely solved by restructuring the code in a smarter way. Also, as a general rule in Python, for loops should be avoided for any kind of serious computation: one should use numpy instead. Both of the mentioned flaws are simple to rectify. However, we do not bother ourselves with this since real-time processing is not required in our case: we generate our images/videos offline.
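The vertex-sampling and coloring steps can be sketched without any OpenGL at all. In this pure-Python illustration, the "CPPN" is a hypothetical fixed sine/cosine mapping standing in for a real network (the actual post uses a learned/random CPPN, and sferogen.py handles rendering); the sphere vertices come straight from the spherical-to-Cartesian formulas above:

```python
from math import sin, cos, pi

def sphere_vertices(n_theta, n_phi, r=1.0):
    """Sample theta in [0, 2*pi) and phi in [0, pi] uniformly; return Cartesian points."""
    verts = []
    for i in range(n_theta):
        theta = 2.0 * pi * i / n_theta
        for j in range(n_phi + 1):
            phi = pi * j / n_phi
            verts.append((r * cos(theta) * sin(phi),
                          r * sin(theta) * sin(phi),
                          r * cos(phi)))
    return verts

def cppn_color(x, y, z):
    """Toy coordinate->RGB mapping standing in for a real CPPN; channels in [0, 1]."""
    return (0.5 + 0.5 * sin(3.0 * x),
            0.5 + 0.5 * cos(5.0 * y),
            0.5 + 0.5 * sin(2.0 * x + 4.0 * z))

# One color per vertex, exactly as in the post; OpenGL would interpolate the rest.
colors = [cppn_color(*v) for v in sphere_vertices(16, 8)]
```

As the post notes, real code should vectorize these loops with numpy; the structure is kept explicit here to mirror the math.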
Future work It would be interesting to modify the whole procedure to include a time-varying colorization of the shape. This could be achieved in a similar manner as in our last post: we can simply add a time-varying input to the CPPN. However, the required modifications to the program linked above are significant and we leave it for a potential future post.
Rational Decisions - The CEO Library

This book has 1 recommendation.

Nassim Nicholas Taleb (Flaneur): This is a must read as it presents a comprehensive set of the principles and axioms behind neo-classical economics. Binmore is a mathematician, hence everything is mapped properly and clearly. I spent several days in a seminar with Binmore and was surprised to discover, from his arguments, that much of the criticism against the foundations of decision theory is a strawman. For the theory doesn't say what people think it says. It may have some problems (such as knowledge of probability and understanding of future payoffs) but not the problems discussed in the behavioral and heterodox literature that appear to be violated by people in their experiments. Binmore writes the following gem: "Nor does the theory [Revealed Preferences] insist that people are selfish, as its critics mischievously maintain. It has no difficulty in modeling the kind of saintly folk who would sell the shirt off their back rather than see a baby cry". Binmore doesn't say it explicitly, but hints that even the highly influential critiques of Amartya Sen in "Rational Fools" and elsewhere appear to be a bit strawmannish. The book is short and dense enough to be a reference.

It is widely held that Bayesian decision theory is the final word on how a rational person should make decisions. However, Leonard Savage--the inventor of Bayesian decision theory--argued that it would be ridiculous to use his theory outside the kind of small world in which it is always possible to "look before you leap." If taken seriously, this view makes Bayesian decision theory inappropriate for the large worlds of scientific discovery and macroeconomic enterprise. When is it correct to use Bayesian decision theory--and when does it need to be modified?
Using a minimum of mathematics, Rational Decisions clearly explains the foundations of Bayesian decision theory and shows why Savage restricted the theory's application to small worlds. The book is a wide-ranging exploration of standard theories of choice and belief under risk and uncertainty. Ken Binmore discusses the various philosophical attitudes related to the nature of probability and offers resolutions to paradoxes believed to hinder further progress. In arguing that the Bayesian approach to knowledge is inadequate in a large world, Binmore proposes an extension to Bayesian decision theory--allowing the idea of a mixed strategy in game theory to be expanded to a larger set of what Binmore refers to as "muddled" strategies. Written by one of the world's leading game theorists, Rational Decisions is the touchstone for anyone needing a concise, accessible, and expert view on Bayesian decision making.
find the amount of each payment into a sinking fund - Asksia.ai

Answer from Sia

Posted about 1 month ago

Solution by Steps

step 1 For a sinking fund, the payments accumulate with interest to a future value, so the equation is: $P \times \frac{(1 + 0.04)^{9} - 1}{0.04} = 6000$

step 2 Rearranging the equation to solve for $P$: $P = 6000 \times \frac{0.04}{(1 + 0.04)^{9} - 1}$

step 3 Calculating $(1 + 0.04)^{9}$: $(1 + 0.04)^{9} \approx 1.4233$

step 4 Substituting back into the equation: $P = 6000 \times \frac{0.04}{0.4233}$

step 5 Simplifying further: $P = \frac{240}{0.4233} \approx 566.96$

The amount of each payment into the sinking fund is approximately $566.96.

Key Concept

Sinking Fund Calculation. The sinking fund formula $P = FV \times \frac{i}{(1+i)^n - 1}$ determines the regular payment needed to accumulate a specific amount over time, considering interest rates. In this case, with 9 annual payments and a 4% interest rate, the calculated payment is approximately $566.96.
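A quick numeric check of the sinking fund payment, using the figures from the question ($6000 target, 4% annual interest, 9 annual payments):

```python
def sinking_fund_payment(future_value, rate, n_payments):
    """Payment P such that n end-of-period payments accumulate to future_value at the given rate."""
    return future_value * rate / ((1.0 + rate) ** n_payments - 1.0)

pmt = sinking_fund_payment(6000.0, 0.04, 9)
print(f"Payment: ${pmt:.2f}")  # -> Payment: $566.96
```

Accumulating the payment back forward, pmt × ((1.04)^9 − 1)/0.04, recovers the $6000 target, which confirms the formula.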
[Solved] In Exercises find the positive values of p for which the series converges | SolutionInn

In Exercises, find the positive values of p for which the series converges.
ADD / XOR / ROL

After a long hiatus on this blog, a new post! Well, not really - but a whitepaper was published today titled "The Malicious Use of Artificial Intelligence", and I decided I should cut/paste/publish two notes that apply to the paper from an email I wrote a while ago. Perhaps they are useful to someone:

1) On the ill-definedness of AI: AI is a diffuse and ill-defined term. Pretty much *anything* where a parameter is inferred from data is called "AI" today. Yes, clothing sizes are determined by "AI", because mean measurements are inferred from real data. To test whether one has fallen into the trap of viewing AI as something structurally different from other mathematics or computer science (it is not!), one should try to battle-test documents about AI policy, and check them for proportionality, by doing the following: Take the existing text and search/replace every occurrence of the word "AI" or "artificial intelligence" with "Mathematics", and every occurrence of the word "machine learning" with "statistics". Re-read the text and see whether you would still agree.

2) "All science is always dual-use": I am not sure how many of the contributors have read G. H. Hardy's "A Mathematician's Apology", but it is a fascinating read - he contemplates among other things the effect that mathematics had on warfare, and to what extent science can be conducted if one has to assume it will be used for nefarious purposes. My favorite section is the following:

We have still one more question to consider. We have concluded that the trivial mathematics is, on the whole, useful, and that the real mathematics, on the whole, is not; that the trivial mathematics does, and the real mathematics does not, ‘do good’ in a certain sense; but we have still to ask whether either sort of mathematics does harm. It would be paradoxical to suggest that mathematics of any sort does much harm in time of peace, so that we are driven to the consideration of the effects of mathematics on war.
It is very difficult to argue such questions at all dispassionately now, and I should have preferred to avoid them; but some sort of discussion seems inevitable. Fortunately, it need not be a long one. There is one comforting conclusion which is easy for a real mathematician. Real mathematics has no effects on war. No one has yet discovered any warlike purpose to be served by the theory of numbers or relativity, and it seems very unlikely that anyone will do so for many years. It is true that there are branches of applied mathematics, such as ballistics and aerodynamics, which have been developed deliberately for war and demand a quite elaborate technique: it is perhaps hard to call them ‘trivial’, but none of them has any claim to rank as ‘real’. They are indeed repulsively ugly and intolerably dull; even Littlewood could not make ballistics respectable, and if he could not who can? So a real mathematician has his conscience clear; there is nothing to be set against any value his work may have; mathematics is, as I said at Oxford, a ‘harmless and innocent’ occupation. The trivial mathematics, on the other hand, has many applications in war. The gunnery experts and aeroplane designers, for example, could not do their work without it. And the general effect of these applications is plain: mathematics facilitates (if not so obviously as physics or chemistry) modern, scientific, ‘total’ war.

The most fascinating bit about the above is how fantastically presciently wrong Hardy was when speaking about the lack of war-like applications for number theory or relativity - RSA and nuclear weapons respectively.

In a similar vein - I was in a relationship in the past with a woman who was a social anthropologist, and who often mocked my field of expertise for being close to the military funding agencies (this was in the early 2000s).
The first thing that SecDef Gates did when he took his position was hire a bunch of social anthropologists to help DoD unravel the tribal structure in Iraq and Afghanistan.

The point of this digression is: It is impossible for any scientist to imagine future uses and abuses of his scientific work. You cannot choose to work on "safe" or "unsafe" science - the only choice you have is between relevant and irrelevant, and the militaries of this world *will* use whatever is relevant and use it to maximize their warfare capabilities.
Please note that the recommended version of Scilab is 2025.0.0. This page might be outdated.

roots - roots of polynomials

Calling Sequence

x = roots(p)
x = roots(p, algo)

Arguments

p: a polynomial with real or complex coefficients, or a m-by-1 or 1-by-m matrix of doubles, the polynomial coefficients in decreasing degree order.

algo: a string, the algorithm to be used (default algo="f"). If algo="e", then the eigenvalues of the companion matrix are returned. If algo="f", then the Jenkins-Traub method is used (if the polynomial is real and has degree lower than 100). If algo="f" and the polynomial is complex, then an error is generated. If algo="f" and the polynomial has degree greater than 100, then an error is generated.

Description

This function returns in the complex vector x the roots of the polynomial p. The "e" option corresponds to a method based on the eigenvalues of the companion matrix. The "f" option corresponds to the fast RPOLY algorithm, based on the Jenkins-Traub method. For real polynomials of degree <=100, users may consider the "f" option, which might be faster in some cases. On the other hand, some specific polynomials are known to be able to make this option fail.

Examples

In the following examples, we compute roots of polynomials.

// Roots given a real polynomial
p = poly([1 2 3],"x")
roots(p)
// Roots, given the real coefficients
p = [3 2 1]
roots(p)
// The roots of a complex polynomial
// The roots of the polynomial of a matrix
p = poly(A,'x')
roots(p)

The polynomial representation can have a significant impact on the roots. In the following example, suggested by Wilkinson in the 60s and presented by Moler, we consider a diagonal matrix with diagonal entries equal to 1, 2, ..., 20. The eigenvalues are obviously equal to 1, 2, ..., 20. If we compute the associated characteristic polynomial and compute its roots, we can see that the computed roots are significantly different from the expected eigenvalues.
This implies that just representing the coefficients as IEEE doubles changes the roots.

A = diag(1:20);
p = poly(A,'x')
roots(p)

The "f" option produces an error if the polynomial is complex or if the degree is greater than 100.

// The following case produces an error.
p = %i+%s;
roots(p,"f")
// The following case produces an error.
p = ones(101,1);
roots(p,"f")

See Also

• poly — polynomial definition
• spec — eigenvalues of matrices and pencils
• companion — companion matrix

Authors

• Serge Steer (INRIA)
• Copyright (C) 2011 - DIGITEO - Michael Baudin

Bibliography

Jenkins, M. A. and Traub, J. F. (1975), "Algorithm 493: Zeros of a Real Polynomial", ACM TOMS, Volume 1, Issue 2 (June 1975), pp. 178-189.

Jenkins, M. A. and Traub, J. F. (1970), "A Three-Stage Algorithm for Real Polynomials Using Quadratic Iteration", SIAM J. Numer. Anal., 7, 545-566.

Jenkins, M. A. and Traub, J. F. (1975), "Principles for Testing Polynomial Zerofinding Programs", ACM TOMS, 1(1) (March 1975), pp. 26-34.

Used Functions

The rpoly.f source code can be found in the directory SCI/modules/polynomials/src/fortran of a Scilab source distribution. In the case where the companion matrix is used, the eigenvalue computation is performed using the DGEEV and ZGEEV LAPACK codes.
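For readers without Scilab, the same decreasing-degree coefficient convention can be exercised in a rough pure-Python sketch. This uses Durand-Kerner simultaneous iteration, which is neither the RPOLY/Jenkins-Traub algorithm nor the companion-matrix method described above, but it reproduces the same results on small well-conditioned polynomials:

```python
def poly_roots(coeffs, iterations=200):
    """Roots of a polynomial with coefficients in decreasing degree order
    (the same convention Scilab's roots uses), via Durand-Kerner iteration."""
    n = len(coeffs) - 1
    monic = [complex(c) / coeffs[0] for c in coeffs]  # normalize leading coeff to 1

    def p(z):
        acc = 0j
        for c in monic:  # Horner evaluation
            acc = acc * z + c
        return acc

    roots = [(0.4 + 0.9j) ** k for k in range(n)]  # standard distinct starting points
    for _ in range(iterations):
        updated = []
        for i, r in enumerate(roots):
            denom = 1 + 0j
            for j, s in enumerate(roots):
                if j != i:
                    denom *= r - s
            updated.append(r - p(r) / denom)
        roots = updated
    return roots

# (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
rs = sorted(poly_roots([1, -6, 11, -6]), key=lambda z: z.real)
print([round(z.real, 6) for z in rs])  # -> [1.0, 2.0, 3.0]
```

Like the Wilkinson example above, this method degrades on ill-conditioned coefficient representations; it is an illustration, not a replacement for RPOLY or LAPACK.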
North Central Agricultural Statisticians | NCCC170

North Central Agricultural Statisticians

Initial Proposal

I. NCR — North Central Agricultural Statisticians

II. Duration: October 1, 1990 through September 30, 1993

III. Justification: Statisticians who consult and do research in an Agricultural Experiment Station have a unique relationship with the University. Their role in service is usually much more extensive than that of other faculty positions within the university system. This consulting allows the Land Grant Institutions, through the use of agricultural statisticians, to perform their mission of agricultural research more efficiently than would otherwise be possible. In fact, the sentiments expressed by Agricultural Experiment Station Director Dr. C.W. Upp over 25 years ago are still valid: “It is my conviction that experiment station statisticians can be and usually are a means of saving research funds.” (Auburn University, August 17, 1962) As budgets become tighter, there is a demand to make our research dollars go further. This requires that the experiment station statisticians continually keep abreast of the latest statistical methods in both design and analysis. There has been increasing quantification in many fields of agricultural research. The development of appropriate statistical methods and recommendations for sound statistical practice have been major goals of statisticians affiliated with Land Grant Institutions. There are many areas of methodological development and of setting guidelines for practice where increased interaction among agriculturally oriented statisticians would be useful. An NCR committee could provide an extremely useful function by serving as a focal point for the development of methodology and the implementation of sound statistical practice. An important role of an NCR committee would be to provide input into NC committees for which statistical design and analysis are a vital component.
An example might be committees dealing with modeling. An important component of modeling efforts includes obtaining and analyzing empirical data. It is highly likely that a statistically oriented NCR committee could play a key collaborative role in working with other committees. This collaborative effort could be carried out by scheduling the annual meeting to overlap (by invitation) with an NC committee when the affected researchers feel they need such involvement. The formation of an NCR committee will enable the members to be acquainted with each other's expertise. When an NCT or NCR committee would meet on a campus, that AES statistician could meet with the committee and would be able to draw on the resources of other AES statisticians. Since most states generally have but one or two statisticians assigned full time to the Agricultural Experiment Station, the station statistician works somewhat in isolation. However, the statistical problems which each one faces have a great deal in common. It is useful for the Agricultural Experiment Station statisticians to meet and solve some of their more difficult problems jointly and to discuss what steps have been taken by their counterparts in other states in similar situations. In this respect the statisticians can better work together to provide service to the agricultural researchers of their state. There exists a committee of University Statisticians of Southern Experiment Stations (USSES). There are several reasons to form an NCR committee, NC Agricultural Statisticians (NCAS). First, the USSES group already has a large membership, and in order to be effective the group needs to be small. The joining of NCAS with USSES would surely produce a group too large to be an effective problem-solving group. Second, there are different types of agricultural problems in the NC region.
Foremost on the list are the limited water resources, where studies involving irrigation, water quality, water supply and the interaction with fertilizer and pesticides are being conducted. Next there are many different perennial crops such as winter wheat and corn, and arid crops such as grain sorghum. Finally, the livestock problems are different: the NC region has large feed lots, confined swine, and tall and short grass prairies used for grazing. Many of the statistical problems faced by the NCAS and USSES are similar, but because of differences in the target commodities the solutions are different. For example, irrigation trials require much larger experimental units, and these can be used to construct complex split-plot and strip-plot experimental designs. With the formation of an NCR committee, the research will be more relevant to the region. Because of the locations of states and interests of individual statisticians, some universities will be members of NCAS as well as USSES. Such joint membership will enable cross-fertilization of ideas.
IV. Committee Objectives:
(1) To promote cooperative research among statisticians with interests in agriculture.
(2) To meet annually to discuss unresolved statistical problems of mutual interest and to explore potential approaches to solutions. Discussions would focus on the statistical aspects of problems arising from agricultural research.
(3) To facilitate more rapid transfer of statistical methodology and recommendations on practice to agricultural researchers, which could involve collaboration with other regional committees where substantive statistical input would enhance the research programs.
40 - Joining Logic, Relational, and Functional Programming - Michael Arntzenius This episode explores the intersections between various flavors of math and programming, and the ways in which they can be mixed, matched, and combined. Michael Arntzenius, "rntz" for short, is a PhD student at the University of Birmingham building a programming language that combines some of the best features of logic, relational, and functional programming. The goal of the project is "to find a sweet spot of something that is more powerful than Datalog, but still constrained enough that we can apply existing optimizations to it and imitate what has been done in the database community and the Datalog community." The challenge is combining the key part of Datalog (simple relational computations without worrying too much about underlying representations) and of functional programming (being able to abstract out repeated patterns) in a way that is reasonably performant. This is a wide-ranging conversation including: Lisp macros, FRP, Eve, miniKanren, decidability, computability, higher-order logics and their correspondence to higher-order types, lattices, partial orders, avoiding logical paradoxes by disallowing negation (or requiring monotonicity) in self-reference (or recursion), modal logic, CRDTs (which are semi-lattices), and the place for formalism in programming. This was a great opportunity for me to brush up on (or learn for the first time) some useful mathematical and type theory keywords. Hope you get a lot out of it as well – enjoy! So, welcome, Michael. So, you go by Michael, yeah? Because your online username is @rntz. And that's how it's pronounced, rntz? That's how it's pronounced, rntz. Where did you undergrad? Yeah, computer science. And so, when did you get into programming languages specifically in computer science? Very shortly after I got into programming.
So, I think the thing that I, sort of, vaguely wanted to do when I started programming was make video games, which- That's a common one, yeah. Yeah. But I, sort of, very quickly got frustrated with the tools available to me, right? And started bouncing through programming languages. What programming language did I start with? It might have been Python. It might have been Visual Basic. It might have been... I don't even remember now. But anyway, I eventually found my way to Lisp and to Scheme. And that was really sort of a revelation. I really enjoyed Lisp and Scheme. And I just sort of started going down the building tools for making your own tools rabbit hole, right? Because any time I would try to do something concrete, I would get frustrated that it was hard. And I would think, how could I make it easier to do this thing? I started thinking about building a tool for that thing and if you keep doing that, you end up with a programming language. And I've been going down that rabbit hole for, I guess, more than a decade now. You just never stopped yak shaving. Yeah. Yeah, sort of. I've like narrowed my scope a lot, right? Which academia will do to you, right? You have to focus if you're going to get anything done. Yeah, yeah, well said. So, I find that a lot of us go through various paradigms and topics, like blocks-based programming or structured editors or logic programming or functional programming. Databases, you know. There are a bunch of different ways to improve programming. What was your arc through all those topics? Or was it not as winding? Did you kind of know early on? No, I mean, there's been some ramblings. So, the first, the place I embarked was Lisp and Scheme. Which is sort of- It's a common starting place for programming interested people. Right. And Lisp and Scheme had, sort of, a couple of interesting ideas that have stayed with me. Lisp when it first came out had dozens of ideas that other languages didn't have, like garbage collection.
But nowadays, garbage collection is really common. So, garbage collection didn't leave a lasting impact on me, other than that, like, yeah. I don't like doing manual memory management, but that's solved. We know how to do that now. But the things that are, even now, not entirely mainstream about it are s-expressions and using s-expressions to represent all of your data, right, or most of your data. Functional programming, that's obviously getting much more traction nowadays, but it's still not entirely mainstream. I don't know. I guess it depends on who you ask. And macros. And so, early on, the one that seemed the most exciting to me and most cool was macros. Yeah. And I guess it goes hand-in-hand with the s-expressions thing. I guess it's almost less the s-expressions and more the homoiconicity. That's another phrase that I- You don't like the phrase? Well, I find it kind of ambiguous. People make a big deal out of it, but they can never define exactly what counts as being homoiconic. The thing that I think is important is it has a built-in data structure for representing the syntax of your language, but that's not unique to it. Python also has this. Most people don't know it, but Python has an AST datatype in the standard library. But also, this datatype, s-expressions, is sort of the thing used to represent almost everything, right? It's not a special-purpose datatype, s-expressions. You use its building blocks everywhere else. You use lists. You use symbols. You use numbers, right? In Python, if I want to understand the AST, I have to go read the documentation specifically for the AST. In Lisp, I'm already... there's very little distance between the tools that you familiarize yourself with for general programming and the tools that you use to write macros. And so, that's sort of what I think homoiconicity is: the thing that makes macros easier.
It's that the same data structures and concepts you use to write ordinary code are the ones you use to write macros. There's not a huge gulf between them, which makes it really easy to get started writing macros. Yeah, that's really well said. I think that captures part of what's really, really powerful about the s-expressions, macros, pairing. You can turn the tool on itself and use it in the same way you've been using it to do other things, but on itself. Yeah, I guess it's quite empowering, because I think it's in the theme of blurring the line between a creator of a tool and a user of a tool. Yeah, definitely. I mean, it's kind of intoxicatingly powerful, right? Everybody gets turned on to... I don't know about everybody, but a lot of people get turned on to macros. And then, some never stop trying to use macros to solve every problem, right? It's really fun to write a macro that gives you a little language for solving a particular problem. So, right, oh, and also, the other thing that's relevant. I applied to work at my dream job, which was working on Eve. Oh, with Chris Granger and, yeah, yeah. With Chris Granger and Jamie Brandon. And I should remember the names of the other people who were working on it, but I don't. Corey was after I applied. Yeah, I also sent an email. I don't know if applied is the right word for, "I want to work for you.", in an email. So, I think we have that in common. I imagine a lot of the listeners to the podcast are, well, saying, "Yeah, yeah, I emailed Chris, too." Yeah. Well, I know them. And then, they flew me out to interview. Oh, wow. Okay, great. And then, turned me down. You got farther than I did. I got a, I think, less than a sentence of like, "Sorry, we're not interested." Or like, "Sorry, no." Yeah, but talking with them was really cool and gave me a clear idea of what they were trying to do. And part of the core technology they were building on was this Datalog-like stuff. Yeah. Oh, that's when you first heard about...
You first got into it? That's really where I first got interested in the relational algebra. No way! That's fascinating. So, I guess, I probably have said this in the intro to the podcast, but you're like really into Datalog, it seems. That's what you're basically about. So, that's a fascinating... yeah, a fascinating little historical tidbit that you got it from the- Yeah, my research direction has been determined by getting turned down for my dream job. Well, it's just so funny to think that Eve was... which just feels so outside of academia. They took things from academia, but like the fact that they were then able to influence academia, I just find that somehow fascinating and wonderful. Yeah, I mean, I think that academia is more open-minded than a lot of people might think. Yeah, of course, than their reputation. Yeah. I guess they're just people on like, you know- Especially grad students. Especially grad students. You give less as a person when you get tenure. I'll regret saying that at some point in my life. Well, I guess, there's like a period of time in which you can like choose which various sliver of knowledge you're going to be an expert on. And once you've established that, it's not... You can still change, but once you've established- But it's hard, right? It's kind of a sunk cost thing, but it's also like... You're already there. You might as well just keep going. Yeah. You can think of it as sunk cost or you can think of it as you had built up an expertise in a very specific area, and it's sort of a matter of your relative advantages, right? You have a lot of knowledge of this area, so you have a relative advantage in working at... starting in a new area is like starting all over again. Yeah, yeah, exactly. So, when you're like a grad student, it's very easy to be influenced by things. But then, 10 years from now, you're not going to want to...
If the next Chris Granger comes up with a new company in 10 years with a new direction, you're like not going to switch to that thing, you know? It's like a one-time thing, maybe. Yeah, or it gets harder. It gets harder, yeah. Or less common. Yeah. I've lost track of where we are. I'm curious how you originally got into FRP. And how you found that. I'm still obsessed with it. I've been obsessed with it for years. Since I saw React JS, I was obsessed with it. I was into all the front-end frameworks. Then, finally, I found Conal Elliott's work. And then, I was like... Ahhhhh. And I'm really into it. And now, I'm like annoying, because I'm so into it. And I have like nobody to talk to, because I almost feel like it was like... Or anyways, the people I talk to aren't really interested in it the way that I am. Yeah. I mean, I think it's sort of gained a brief moment of being slightly more mainstream, especially with Elm, right? And then, Elm actually kind of abandoned the FRP approach. And there haven't been a lot of attempts to really push it forward since then. I mean, there's been academic work on it, but it's not in the spotlight anymore. And it was never hugely in the spotlight. So, I got interested in it, more or less, because, yeah. I think Elm might have been part of it, exposing me to it. And it seemed like a nicer way to write user facing programs. It still seems like it might be a nicer way to write user facing programs. Although, I think my attention has turned more generally to the problem of incremental computation. Which FRP, I think, is... or dealing with change is how I would summarize the problem as I see it of front-end programming. Yeah. Well, I guess because like events, like, you know, you have some UI. And it mostly stays the same. But as users interact with it, the UI slowly changes. Yeah. But also, the external world is changing, right? You're running a website where someone has a shopping basket, right?
And maybe you're in a distributed setting. Now, things can change. They can change at different places, at different points in time. And you have to integrate all of that somehow. That's interesting. I guess, because I don't... I usually make the distinction in my head between batch programs and then, reactive programs. Reactive programs like respond to the environment. Batch programs just process something once. And I guess what a reactive program is is something that changes over time, but it has inertia. It's not a step-function kind of thing. It's usually a smooth kind of changing thing, smoothish. Occasionally, I'll press a button and the entire page will change. But usually, it's like- Most changes are small. Most changes are small. Anyways, maybe that doesn't make any sense. No, I mean, it makes sense. Continuity is sort of a huge theme that connects to everything, if you look into it deep enough. And I don't fully understand it. Almost like differentiability. Yeah. In a certain sense, only continuous functions are computable. There's this connection with topology and computation that I do not fully understand. I see. That's interesting. Anyway, yeah. So, I got into FRP, because I was interested in it as a better, or nicer, model of writing UI programs. And why did I end up not so interested in it? I guess I basically got sidelined in my mind by the Datalog and Datafun and relational programming ideas. Cool. Well, so, it feels like you first got into the logic programming, then got into relational programming? Or it kind of happened at the same- Happened at the same time. I treat them as kind of the same. Oh, relational programming and logic programming. Oh, they're kind of synonyms. Because to me, one feels like... Relational, I think SQL databases. And logic, I think Prolog. But I guess part of your thing is you want to unify them. Oh, I don't know about whether unifying them fully, but I do think of them as strongly related, right?
Do most logic programmers think of that in that way? I don't know. Certainly Will Byrd and his collaborator Friedman refer to their work as relational programming. Will Byrd is a... How do I explain Will Byrd? He's an academic. He's, I think, probably mostly known for miniKanren, which is a relational/logic programming language that is distinctly not Prolog. It's built in Scheme. And its, sort of, notable features... well, there's two things I would mention. First of all, there's a mini version of it called microKanren, which is notable for having an implementation that is like 50 lines long. And that has been ported to every language under the sun, because it has an implementation that is about 50 lines long. But it captures the essence of miniKanren, so it's a really fun thing to play around with, if you're interested in relational programming. And the interesting thing about miniKanren, beyond that, is that unlike Prolog, its search strategy is complete. So, in Prolog, if you write a particular thing, when you read it, it logically specifies something. Like, let's say, you write transitive closure. So, you have edges in the graph. You have a relation, a predicate, edge, that takes two arguments and says there's an edge from this node to that node. And you want to find a predicate that gives you reachability, that tells you there's a path from this node to that node. If you do this in the obvious way, which is just there's a path from X to Y, if there's an edge from X to Y. And there's a path from X to Z, if there's a path from X to Y and a path from Y to Z, right? It's edges, but transitive. If you do this and you feed it to Prolog, it will infinite loop and generate nothing of interest. Because it'll just keep generating extra facts that you already know? Yeah, so, it'll take the second clause, which is a path that can be built from a concatenation of two paths. And if you ask it, "Hey, what are the paths?"
It will keep applying that second rule, because it does, sort of, depth-first search, and the search tree you've given it has infinite branches. And so, this is really annoying from a pure logic point of view. I've given you the logical definition of this. Why aren't you computing it? This is the promise of logic programming discarded for the sake of a simple implementation. And miniKanren is like, "No, we will not discard that promise." We give the complete search strategy. If you give us some rules, we will give you eventually all of their consequences. No matter how you order the rules, no matter what you do, we will eventually find all the consequences of these rules. And they're able to do this, because...? They changed the search strategy. Oh, it's just an implementation detail, like- It didn't restrict the way you could- There are a couple of features in Prolog that are specifically about the search strategy and are about sort of extralogical things. So, for example, there was the cut operator, or the bang operator, that prevents backtracking. That's about the search strategy. It prevents backtracking. Wait, this is in miniKanren? No, cut is in Prolog. It is not in miniKanren. So, and miniKanren's almost like higher level? It wouldn't let you- Yeah, it would not let you do this. Explain the... like, direct the search strategy. That's not entirely true. The order in which you put things will affect the order in which the tree gets searched, but it will eventually find... search the whole tree. How will it know when to stop, in a way that Prolog doesn't know when to stop? So, if you give it an infinite search tree, it will never stop. But if you give Prolog an infinite search tree, it might never stop and also not explore the whole tree. Right? So, it might just get stuck going down one particular branch of the tree and never come back up. Whereas, miniKanren is more like doing...
It's not doing a breadth-first search, but it's doing something more similar to a breadth-first search where eventually, it will reach any node in the tree. Oh, but it might keep going forever. But it might keep going forever if the tree's infinite. Yeah, that's your problem. Right. Datalog, on the other hand, simply does not allow infinite searches. That's the area I focused on. I focused on very decidable logic programming. Okay. Well, let's rewind and unpack some of these terms, because I want to give... I want to use this opportunity to give a good foundation for these topics, because I think a lot... most of us, I think, have heard of these things, Prolog and logic programming and relational things. But anyways, I just want to start on firm foundations. So, when I hear relational, I think of Codd and SQL. So, maybe give like the brief history. Is that kind of where relational came from? Yeah, yeah. It's a perfectly reasonable thing to think of when you hear relational, right? Like, Codd created relational algebra or relational calculus. I still don't know what the difference between those two is, by the way. And from that, came SQL and most of our modern database work. And so, when I think of that, I think of path independence and normalization. That's where my brain goes, but is that... That's not- What is path independence? The opposite of path independence is when I have like a nested JSON data structure, I realized like, oh crap, I actually want... If I have a list of people and each person has a list of favorite things. I'm like, "Oh crap, actually, I want to know how many distinct favorite things there are." Basically, I know I can get in trouble if I just nest the data structure in the way that I'm going to want the data. And then, I'm like, "Oh crap, I actually want the data a different way." And usually what happens to me is I end up taking that data structure and then, unfurling it into orthogonal lists that point to each other. Yeah.
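That "unfurling" of a nested structure into flat lists that reference each other can be sketched in Python. The people-and-favorites data here is a made-up example, not anything from the conversation:

```python
# Made-up nested data: each person carries a nested list of favorites.
people = [
    {"name": "Alice", "favorites": ["chocolate", "strawberry"]},
    {"name": "Bob", "favorites": ["chocolate"]},
]

# Unfurl into two flat relations that reference each other by an id.
person_rel = []    # rows of (person_id, name)
favorite_rel = []  # rows of (person_id, favorite)
for pid, person in enumerate(people):
    person_rel.append((pid, person["name"]))
    for fav in person["favorites"]:
        favorite_rel.append((pid, fav))

# The question that was awkward against the nested shape is now easy:
distinct_favorites = {fav for _, fav in favorite_rel}
print(len(distinct_favorites))  # 2
```

Each flat list can now be queried on its own terms, without caring how the data was originally nested.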
Which is very much the, sort of, relational approach, right? Just have a bunch of relations saying how your data relates. Don't think too hard about nesting everything so it's efficient. Leave that up to the query optimizer and hope you have a good query optimizer. What is the relationship between relational algebra and SQL? So, relational algebra is this formalism that Codd came up with in the 70's...? Could be the 60's. I could be wrong. But anyway- Is it like lambda calculus is to functional programming? Kind of, yeah, right? So, SQL can be thought of as an implementation of relational algebra plus some other stuff, except not quite. So, it's relational algebra plus some stuff, but it gives up on some of the simplicity of relational algebra. For example, it has bag semantics, not set semantics. So, there's a difference between having multiple of the same thing, right? And it also adds some stuff to relational algebra that's really important, like aggregations. So, anyway, before talking about what it adds, what is relational algebra? Relational algebra is you have a bunch of relations, right? A relation is basically just a set of tuples, right? So, it's a- Okay. And a tuple is just like a dictionary or an object? Like, it's key values. You can think of it as key values, if you'd like, right? But you think of a relation as having a bunch of columns, right? Like, there might be first name, last name, user ID. The elements of a relation are individual rows, with the value of first name and the value of last name and a user ID. And that's what a relation is, right, so it's a collection of rows. And all the rows have the same shape. They have values for each column. And you say that's a set of tuples. So, it's not a list. It's- Yeah, it's not ordered. It does not care about duplicates. Okay. And IDs, did we talk about that or not? Not yet. No, no, that's not particularly important. I don't even know whether the concept of a primary key is in the relational algebra.
It's certainly not- Again, that's sort of in my mind, I haven't read any of the original stuff on relational algebra. I've only sort of read secondary sources. I read Wikipedia. But in my mind, that's sort of just a concept layered on top of it that formalizes a pattern of using relational algebra, right? So, you have relations. Now, how do you use relations? And the answer is you have various operators that combine them. Some of them are simple filtering. You can say, "Throw out the things in this relation that don't satisfy such and such a condition." Union, if two relations have the same column names, right, or contain the same shape of stuff, you can take their union. And then, the most interesting one, of course, is relational joins. And what a join is is... Actually, perhaps before we even talk about joins, we can talk about cross product. Unions, I thought that was what joins were. No, so a union is just like give me anything that is in either of these sets. Oh, yeah, yeah, yeah, yeah. Right. So, it's literal set theory union. So, a relation like... I usually think of a... In a database, you have a customer table and it's all of the customers. But a relation wouldn't be... Like, I could have a relation of two different subsets of customers and make a union. Yeah, you could if you wanted to. And you can do that in SQL, too. SQL has unions. They're not all that commonly used, but they are there. I see. So, joins are- joins are like the thing. All this other stuff is useful and sometimes, necessary. But joins are the single most common operation. And what they are, the way I like to think of them, although this may not be immediately obvious, is they're a cross product followed by a filter followed by a projection. So, hold on. What are each of those things? A cross product is just I have two relations. Give me all possible pairings of things from those relations.
So, if I have a table of customers and I have a table of ice cream flavors, let's say, I have Charlie Coder, user ID 0 and Hilary Hacker, user ID 1. And I have chocolate and strawberry. The cross product will be Charlie Coder, user ID 0, strawberry; Charlie Coder, user ID 0, chocolate; Hilary Hacker, user ID 1, strawberry; Hilary Hacker, user ID 1, chocolate. Right? All possible combinations. This can get very big. So, okay, why would you want to do that? Well, you can then filter this by some predicates. And I've chosen a bad example, because there's no obvious way those things are connected. Maybe we can say that the parity of somebody's user ID determines whether they like chocolate or strawberry ice cream. Or another way to do it would be you have... a realistic way is you have a table of users. And then, a table of orders. You know, the user ID and then, the order ID, right? So, the table relating user IDs to order IDs and a table relating user IDs to their names. And you want to have the order IDs and the names paired together. So, you can take your cross product, which just gives you every possible user paired with every possible order. And then, you filter it down by requiring that the user IDs match. So, that's called an equijoin, because you're requiring the two things to be equal. And then, you, yeah, you throw out the junk, you project. That's not particularly important, right? Because you have two copies of the user ID column and you're requiring them to be equal and simply throw out one. Okay. So, the select is kind of the project. The join is the cross product. And then, the predicate is the join on- So, yeah, this is when you have two relations and you want to correlate them somehow. You want to say, "Hey, give me all combinations of things from this relation and that relation that satisfy some predicate.", right, where these things match. And that, to me, that's the relational algebra. You have relations. You have joins.
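The "cross product followed by a filter followed by a projection" view of an equijoin described above can be sketched in Python. The users and orders here are invented sample data in the spirit of the example:

```python
# Relations as sets of tuples; column layouts are illustrative assumptions.
users = {(0, "Charlie Coder"), (1, "Hilary Hacker")}   # (user_id, name)
orders = {(100, 0), (101, 0), (102, 1)}                # (order_id, user_id)

# Cross product: every possible pairing of a user row with an order row.
cross = {(u, o) for u in users for o in orders}

# Filter: keep only pairs whose user ids match (the "equi" in equijoin).
matched = {(u, o) for (u, o) in cross if u[0] == o[1]}

# Projection: drop the duplicate user_id column, keep (order_id, name).
joined = {(o[0], u[1]) for (u, o) in matched}
print(sorted(joined))
# [(100, 'Charlie Coder'), (101, 'Charlie Coder'), (102, 'Hilary Hacker')]
```

A real query engine would never materialize the full cross product, but semantically this is what the join computes.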
You have a few other things like unions and filters. And you're done. And you can do a whole lot of stuff with this, but not everything. For example, if you want to have the sum of something, the relational algebra does not do that. It deals with relations. A sum is a number, not a relation. All right. It just does not have that. Oh, okay. That's interesting. Which, I mean, obviously, this is a limitation. It's not as if anybody has ever thought, oh, that's enough. Why would you leave that out? But I guess, when you have an algebra, you have types. And you have operations on those types. So, if you had... like we had algebra for numbers and we can add 1 and 2, and it'll give you the number 3. But let's say I want the word "three". We need the word "three", but like it would never give you the word "three". It would just give you the number 3. Right. It's useful to formalize numbers, even without formalizing how to print them as strings. Because, well, you can add that part if you like. But here's how we do numbers. Relational algebra is here is how we do relations. And then, you can add extra stuff on top of that. And SQL does and it's useful. Well, it's interesting, because Datalog goes in a totally different direction. Datalog adds some- So, Datalog, maybe give the logic programming background. Yeah. So, well, one way of explaining Datalog, so Datalog can be thought of as a logic programming language. It can be thought of as a database language. It's, sort of, somewhere between the two. Okay. So, sorry for interrupting. Keep going with what you were saying. Yeah. And one way of thinking about it is it takes relational algebra and it adds something to it, just like SQL, but it adds a completely different thing. It adds the ability to define relations, to construct relations recursively. And so, the classic example of this is what I already gave, transitive closure in a graph. You have the edges.
And you want to find all the pairs of nodes which are reachable from one another. So, an edge relation would have a source and a destination column. And in relational algebra... Pick any number, N, and I can find you the distance-N paths. I see. I see, because- I can't find you all the paths. I see. I see. Because if you do the cross product once and then, you filter, that gets you one step. And then, you can do the cross product again. We can do the cross product infinitely or, like, until it doesn't change or something like that. I see. Okay. So, yeah. I want to spend a lot of time talking about computability with you, because I feel like... because I think that's something that comes up a lot when people discount logic programming. It's too slow or it'll infinite loop forever. Basically, it's like too abstract. It's like let's stick closer to the bits, because we know that if we're controlling all the bits, we know the program will end, because we have a tight rein on it. So, yeah. So, I guess, maybe, let's talk theoretical. What is computability? Well, whether something can be computed or not by a machine. Usually, we think of a Turing machine or whatever. It hardly matters. Is it related to decidability? Yeah, decidability is the same thing, basically. Decidability, strictly speaking, is pose a question with a definite yes/no answer, right. Or think of a question, a class of questions parameterized by something, right, with definite yes/no answers. So, an example would be "are these two numbers equal?" So, that's a class of questions. It's not a specific question. It'd be like, "Does two equal three?" And, of course, that can be answered by a machine. Just build a machine that returns no. But it only gets interesting once it's a class of questions. So, it's whether two numbers are equal. So, it has two placeholders, two variables in it, X and Y. You can call them whatever. That question is decidable, if your numbers are natural numbers.
It is not decidable if your numbers are real numbers. Oh, I see. Because real numbers could be infinite? They could- Yeah, real numbers have infinite precision. And you cannot tell in advance how many digits you'll have to look at. You might have a number that you... Let's say you have the number one, two, one, three, four, five, six, seven. And you have another number one, two, one, three, four, five, six, seven. And they keep going. And they keep going forever. How do you know when you're done? How do you know that they really are equal? Maybe, there's a digit that's not equal just beyond where you looked. So, decidability and computability have a lot to do with looping forever? Yeah, right? To say that a question is decidable is to say there is a Turing machine, or a computer program, that will answer every single question of that form. And for each one, it will answer it in finite time, right, with yes or no. So that it never infinite loops on any particular instance of that problem. If it infinite loops on some instance, then it's not a decision procedure. So, the problem is not decided by that program. Okay, right. So, the real number equality is an interesting case, because it has the property that if these two numbers are not equal, then the sort of obvious program, just compare the digits one by one, right, will eventually say, "These aren't equal." If two numbers aren't equal, eventually you'll get to a digit where they differ. And you'll be like, "They're not equal." It's only when they are equal that- You won't be able to terminate, yeah. Oh, I see. I see. That is interesting. Yeah, so that's called either semi-decidability or co-semi-decidability. I never remember which is which. Oh, I see. Semi-decidability is when it's decidable in specific cases? Semi-decidable is when like you have a program that can... that if the answer is no, it will eventually answer no. But if the answer is yes, it might infinite loop. Okay.
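The digit-by-digit comparison just described can be sketched with infinite digit streams. This is a toy illustration (the generators and `first_difference` are invented for it): the procedure halts whenever the numbers differ, but would loop forever on genuinely equal streams, which is exactly the semi-decision behavior in the conversation.

```python
from itertools import count

# Two "real numbers" as infinite digit streams. Comparing digit by
# digit semidecides inequality: it halts when the streams differ,
# and would run forever if they were equal.

def third():
    # 0.3333... forever
    while True:
        yield 3

def almost_third():
    # agrees with third() for 1000 digits, then differs
    for i in count():
        yield 3 if i < 1000 else 4

def first_difference(xs, ys):
    """Halts iff the streams differ; returns the index of the first
    differing digit. On equal streams, this never returns."""
    for i, (x, y) in enumerate(zip(xs, ys)):
        if x != y:
            return i

print(first_difference(third(), almost_third()))  # 1000
# first_difference(third(), third()) would loop forever.
```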
And so, I forget how we got on this tangent. It's related to Datalog? You're sort of thinking about decidability and computability and logic programming. Oh, okay. I think, let's unroll the stack. And before we continue down this thread, I want to ask you about... You were playing at three things, relations- Right. Tables, relations and predicates. Okay. And do you remember why... oh, because you said... you were explaining how you can get to Datalog from the relational path. Or you can get to it from the logic path. Do you know the history of Datalog, which way did it come from? Or was it influenced by both? So, I think what's sort of... I'm not sure. Datalog, sort of, arose... It was after the relational algebra. I think it arose mostly in the 80's, with people sort of noticing... I don't know whether they noticed that the syntax looks like Prolog. So, based on that, I imagine they noticed, "Hey, if we limit Prolog in such and such a way, then suddenly it is decidable, right?", which is to say, in Prolog you can ask queries where it will infinite loop. In Datalog, you cannot. Every program in Datalog terminates. Every query in Datalog terminates. It will always answer any question you pose of it. Well, what did they remove? Basically every predicate, or relation or table, in a Datalog program has to be finite. So, in Prolog for example, you can define a predicate that takes three lists, I'll call them X, Y, and Z, and is true if X appended to Y is Z. And you can run this. One of the wonders of logic programming, you can run this relation in any direction. So, you can give it two lists, X and Y, and it will give you, spit back at you, the append of these two lists. But you can also give it one list for Z and it will spit back all the lists, which when appended make that list. In other words, it will find all the ways to split a list into two smaller lists. And these are both the same relation. You write it once and you get both ways of doing it. I may have missed it.
So, the same relation goes in both... So, maybe give me an example. Maybe give me concrete lists. Sure. So, if I say, you know, append list containing 1, list containing 2, Z, right? Where Z is a variable. It's an unknown. And I'm asking it. When you do this, this is called a query. And it's saying, "Give me all the values of Z such that this is true." Such that the append of one and two is Z. In a normal program, it would just say Z equals append one, two? You just give the logical expression that you want to be true and it finds the solutions. And in this case, the solution is Z equals one, two. But you can also put variables for the other parts of it. You can say, "Give me X and Y such that append X, Y is one, two." All right? So, this is like saying one, two equals X plus Y, which in a normal programming language would be a syntax error, probably. I see. I see. I see, okay. But in Prolog, it will give you back multiple answers. It will say, "Okay, one solution is X is the list one, two and Y is the empty list." Another one is X is the list one and Y is the list two. And the final one is X is the empty list and Y is the list one, two. And the same code, and I can write down the code. Well, I mean, this is a podcast so that's probably not helpful, but I can write down the code for this. It's not very complicated. And it can be used in both directions. Yeah, just write down the code and show it to the microphone. Okay. And so, this is almost like... this can lead us to bad things in Prolog? This can lead to undecidability? I mean, on the one hand, this is part of why Prolog is awesome. And it's also dangerous, because it makes your language more powerful and it can lead to programs that infinite loop. This is not particularly any more dangerous than any other Turing-complete programming language. Every programming language that we know can write infinite loops, except for ones that are very, very carefully limited like Datalog. Datalog cannot infinite loop.
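The "run append backwards" behavior described above can be mimicked in Python. This is only a sketch of the backward direction of Prolog's append relation (the `append_rel` name is invented): given a concrete Z, it enumerates all (X, Y) with X ++ Y == Z, which is the query "append(X, Y, [1, 2])" from the conversation.

```python
# Sketch of running Prolog's append/3 "backwards": enumerate every
# way to split a concrete list z into x and y with x + y == z.

def append_rel(z):
    """All solutions (x, y) to append(X, Y, z)."""
    for i in range(len(z) + 1):
        yield (z[:i], z[i:])

# append(X, Y, [1, 2]) has exactly three solutions:
print(list(append_rel([1, 2])))
# [([], [1, 2]), ([1], [2]), ([1, 2], [])]
```

In real Prolog this is one relation usable in every direction; here the forward direction is just Python's `+` on lists, and only the backward direction needs the generator.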
Cannot infinite loop. In most programming languages, you could just write "while true." Datalog doesn't have "while true". No, right. The equivalent of "while true" terminates with false. That's a little bit of an unfair comparison. But there's a concrete example of this, which is in Prolog, you can feed it, not the liar's paradox. So, it's a logic programming language. So, you can actually translate paradoxes into it. But the liar's paradox is this sentence is false. And this is problematic, because if it's false, it's true. And if it's true, it's false. But there is another, not exactly paradox, the truth teller's paradox. I'm not sure if that's the standard name, but it's this sentence is true. And this isn't really a paradox, because like you can say it's false. And then, it's false, right? Because it says it's true and it's not true, so it's false, okay? But you can also say it's true, because it says it's true. And it's true, so it's true. So, it's unclear what truth value it should have, but it is not paradoxical to assign it a particular truth value. Now, if you feed the equivalent of this to Prolog, you say basically foo holds of the variable X, if foo holds of the variable X. Foo of X, if foo of X. And then, if you ask Prolog, does foo hold of two? It will infinite loop. Now, in Datalog, if you do the equivalent thing, it will simply say, no. No, it's false. So, Datalog has an answer to the question "What is the value of this sentence? This sentence is true," and its answer is false. And the reason for this is, basically, Datalog has a, sort of, least fixed point semantics. Or it's sometimes called a minimal model semantics. But what it basically means is that if you don't say something is true, it assumes it is false. For example, if you say, "There is an edge from two to three." And then, you end your program, right? You say, "That's all there is in the program: there is an edge from two to three."
And then, you ask it, "Is there an edge from three to seven?" No. You didn't write that, so it's not true. So, it infers only the minimum set of things consistent with the program that you've written. It will not infer anything that you didn't write. And so, if you say, "Foo of X, if foo of X." It will not infer that foo of two is true, because there's no way to get to that. It's consistent that it be true, just like it's consistent with what you wrote down that there be an edge from four to seven. You didn't explicitly say that it was false. But, sort of, normally, we only write down the things that are true. Sort of intuitively, if we're describing something, we say all the things that are true of the situation, not all the things that were false. Because there are too goddamn many. And so, based on that, right, based on that idea, only assume things are true if there's a clear way to prove them. Datalog will give you, sort of, the minimal model. It will not... Yeah. I feel like there's a branch of mathematics that does this, that only proves things that are- That's sort of what minimal model or least fixed point means. So, we talked through the basis of relational stuff was relational algebra. The basis for logic programming is logic. First-order logic, yeah. And first-order refers to? First-order means you can quantify over objects, but not over sets of objects. So, first-order logic is the kind of logic that we're, sort of, most familiar with, right? We can say things like, "For any number, X, X plus one is greater than X." And calling it first-order means that you can write that for any number X. So, there's also something less powerful than that, which is propositional logic where you cannot write "for all." You can take primitive propositions and you can conjunct them. You can say, X and Y. You can disjunct them, X or Y, and so on. But you can't quantify over variables.
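The minimal-model idea above can be sketched as naive bottom-up evaluation: start from the written facts and apply the rules until nothing new is derivable. This is an invented toy (the fact tuples and `step` function are illustrations, not a Datalog implementation), but it shows why "foo(X) :- foo(X)" derives nothing and why unwritten facts come out false.

```python
# Naive bottom-up sketch of Datalog's least-fixed-point semantics.
# Facts are tagged tuples; the one rule is the self-referential
# "foo(X) :- foo(X)" from the conversation.

facts = {("edge", 2, 3)}

def step(db):
    derived = set(db)
    # Rule: foo(X) :- foo(X).  It can only re-derive existing foo
    # facts, so starting from none, it never produces any.
    for fact in db:
        if fact[0] == "foo":
            derived.add(("foo", fact[1]))
    return derived

db = facts
while True:            # iterate to the fixed point
    nxt = step(db)
    if nxt == db:
        break
    db = nxt

print(("foo", 2) in db)      # False: not derivable, so assumed false
print(("edge", 3, 7) in db)  # False: you didn't write it down
```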
And then, there's higher-order logics, which let you, effectively, not just quantify over individual things, like numbers, but let you quantify over properties of numbers. For any property, P, of numbers, there exists a number X, which P satisfies. This isn't true. That's a false proposition, because consider the property that doesn't hold of any number. But it allows you to quantify over even larger stuff. And now, I see how it's higher-order. I see how the phrase makes sense. You can have specific propositions about specific numbers. And then, you can have propositions for a bunch of numbers. And then, you could have propositions about propositions. I see. Yeah. And the weird thing is... This is a total tangent, but sort of the weird thing about this is, so logicians, more or less, figured out propositional logic, then first-order logic, then higher-order logic. There's obviously still work on each of these things, but sort of that's the order in which they started considering things. Type theorists and programming language theorists figured out the equivalent of propositional logic, very simple type systems. And then, they figured out precisely second order type systems, right? Type systems that let you quantify over types, but not over values. And then, we're beginning to figure out... Well, we are figuring out dependent types, which are kind of first-order, as well as higher-order. Oh, that's interesting. So, it's kind of like we skipped over the just first-order phase. There are type systems that are kind of directly correspondent to first-order logics, but they're kind of weird. The first thing they figured out was, so called, parametric polymorphism, which is where you're allowed to quantify over types. You're allowed to say, "This function has type... For any type alpha, it takes alpha to alpha." So, in my head, I have... in Haskell, I'm thinking Int -> Int is like first... is the base. That's about...
There's many different kinds of higher-orderness in programming languages. And so, one of them is like, "Are your functions higher-order?", which I think is what you were thinking of. Yeah, you're right. I don't even have to talk about functions. Maybe, I was missing it. No, no. I just was over-complicating my example. We have Ints. And then, we have lists of Ints, which is... and like a list of an Int, is that polymorphic at all or higher-order? That's still first-order? Can be parameterized... That's like even a different direction. That's talking about types parameterized by other types. Well, it's related, but it's not exactly the same thing. So, quantification in a type system is, for example, having a function, take the identity function for example, that works at any type. Or the map function, map's a function. Well, that complicates things, because it also involves lists. But- If you can like describe types. If you have a type that admits other types, like subtyping...? It's not about subtyping, not really. It's about: what is the type of the identity function? Well, you could say it has the type Int -> Int, because it takes Ints. You can say it has the type Bool -> Bool. But neither of these is really a most general type to give it. The most general type to give it is to say, "For any type A, it is type A -> A." That "for any A" I put there, that's the equivalent of a "for all" in logic. But what is it quantifying over? It's not quantifying over values. It's not for any value X. It's quantifying over types. It's for any type X. And that's what makes it a second order quantification. But is it only useful for the ID function? No. It's useful for a lot of other functions. They usually have to involve some sort of data structure. So, an example would be the map function that takes a function and a list and applies the function to every element of that list. It doesn't care whether it's a list of integers or a list of booleans.
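The "for any type A" quantification described above is written in Haskell in the conversation; the same idea can be sketched in Python with type variables from the standard `typing` module. The function names are illustrative; the type variables are the explicit counterpart of the implicit "for all" being discussed.

```python
from typing import Callable, List, TypeVar

# Parametric polymorphism: the "for any type A" is a type variable.

A = TypeVar("A")
B = TypeVar("B")

def identity(x: A) -> A:
    """for all A. A -> A: works at every type."""
    return x

def map_list(f: Callable[[A], B], xs: List[A]) -> List[B]:
    """for all A, B. (A -> B) -> [A] -> [B]"""
    return [f(x) for x in xs]

print(identity(3), identity("hi"))  # 3 hi
print(map_list(str, [1, 2]))        # ['1', '2']
```

Python checks none of this at runtime, of course; the annotations just make the quantification visible, the way the Haskell types in the conversation do.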
So, it has the type: for any type A and any type B, give me a function from A to B and give me a list of A. And I'll give you a list of B. I guess, I normally think about... yeah, it doesn't occur to me that there's an implicit "for any A" and "for any B". The only time I think of the implicit "for any" is if it's like "Show A => ..." you know? Because then, it's like, "Oh, okay, this is for any Show." Yeah, because then it's explicit in the syntax. Yeah, but to a type theorist, we think of there being an implicit "for any A". For any A. I see. I see. And, okay, that's interesting that we skipped... So, the middle level that we skipped would be- First-order, first-order. The equivalent to first-order logic. So, what would be a type in first-order? Well, the reason that we skipped it is it's very not obvious what it would be, right? Because- It's just less useful? Well, I don't know about less useful. The obvious example that I could give is dependent types. But dependent types don't really correspond to first-order logic. They correspond to very higher-order logic, like all the bloody layers. There's not a clear correspondence anymore. Oh, okay, interesting. But the thing that dependent types allow you to do that is like first-order logic is they allow you to quantify over all the values of a given type. So, I can say, "For any natural number, N, this will take a list of length N of, I don't know, integers and return a list of length N of integers." So, I'm not quantifying over a type there. I'm quantifying over natural numbers, right, for the length of the list. That's like what first-order logic lets you do. Okay. Interesting. Okay, so- That was a kind of huge tangent. Yeah, yeah. Well, I feel like this has been great. I feel like we've been talking about interesting things, but we should probably get to your main project. I think we spent enough time laying the foundations and talking around it. So, yeah, give the quick summary... The spiel for Datafun.
So, we have Datalog, right, which is this language that can be thought of as logic programming, but limited, right? Limited so that it's no longer Turing-complete. It always terminates. But because of those limitations, we have, for example, much more efficient implementation strategies for it. And, yeah, I mean, that's basically the idea. It makes the implementation strategies more efficient and lets you do interesting things. Or you can think of it as relational algebra plus fixed points, so it's like SQL with extra stuff... Except aggregations are a pain. So, I'll talk more about that later. But anyway, it's between these two cool areas, logic programming and relational programming. Datalog. That's what Datalog is. But what Datalog doesn't let you do is it doesn't let you notice that there's a repeated pattern in your code and break it out into a function. This is an ability that logic programming has, because logic programming doesn't have the limitations of Datalog, right? But once you impose the limitations of Datalog, which are nice, you lose that ability. But it's also something that functional programming has, because we have functions. See a repeated pattern? Just write the function that encapsulates that repeated pattern. Take the parts that are varying and make them arguments to the function. And take the parts that are constant and make them the code of the function, right? And it seems like this would be a useful ability to have in Datalog. For example, transitive closure, the standard Datalog example. You have a lot of graphs in your life. Yeah. You can write transitive closure in Datalog, but you cannot write a function that, given a graph, takes its transitive closure. It only works for specific graphs. Right. You have to hard code. You have to pick a relation that represents the graph that you want to take the transitive closure of and write the thing that takes its transitive closure. And it's hard coded to that graph.
You cannot plug in a different graph. It's like writing a macro to plug in a different graph, or the ability to write functions. Right. Or add the ability to write goddamn functions. So, that's kind of what Datafun is. It's an attempt to allow you to write what is effectively Datalog code, but in a functional language, so that if you see a repeated pattern in your code, you can just abstract over it. And along the way, we sort of end up adding a bunch of interesting things, because it's easy and natural to add them in the context of a functional language. So, for example, we can add types. Datalog is traditionally kind of untyped. There's no particular problem with adding types directly to Datalog. But as long as we're going through a functional language and we know how to use types for that, we add those. So, you can have sum types now, if you want sum types. Also, lattices, so Datalog... How do I explain the use of lattices in logic programming and in Datalog and in Datafun? I always forget what a lattice is. So, in this case, what I'm actually concerned with are join semi-lattices. People often call them lattices, because saying join semi-lattice every time gets to be a mouthful. But what that means is you have... There's two ways of thinking about it. One way of thinking about it is you have a binary operator that is associative, commutative, and idempotent. So, associative, the parens don't matter. Commutative, the order doesn't matter. Swap things around as much as you'd like. Idempotent, doing things twice doesn't matter. X join... the operator is usually called join, which is confusing, because it's not database join. It's a different operator. So, X join X is X. That's what idempotence means. And it has an identity element, a thing that does nothing. So, the classic example of a join semi-lattice is sets under union. Union is associative. The parens don't matter. It's commutative. X union Y equals Y union X. Order doesn't matter. It's idempotent.
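The abstraction Datalog lacks, a transitive closure you can apply to any graph, is easy to sketch in a functional style. This is an illustration of the point being made, not Datafun syntax: `transitive_closure` here is an ordinary function parameterized by the edge set, so nothing is hard-coded to one graph.

```python
# A transitive-closure *function*: the graph is an argument, not a
# relation hard-coded into the program. (Illustrative sketch, not
# Datafun or Datalog syntax.)

def transitive_closure(edges):
    closure = set(edges)
    while True:  # join closure with itself until a fixed point
        new = closure | {(a, c)
                         for (a, b) in closure
                         for (b2, c) in closure
                         if b == b2}
        if new == closure:
            return closure
        closure = new

print(sorted(transitive_closure({(1, 2), (2, 3)})))
# [(1, 2), (1, 3), (2, 3)]

# Same function, different graph, no rewriting:
print(sorted(transitive_closure({("a", "b"), ("b", "a")})))
# [('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', 'b')]
```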
A thing union itself is that thing. Adding a set to itself. It has the same elements. Right? And the identity element, the thing that does nothing, is the empty set. Addition and multiplication are? Addition and multiplication are not semi-lattices, because they're not idempotent. Oh, if you add a number- Two plus two is four. But maximum is a semi-lattice on the natural numbers. Minimum on the negative numbers would be. I need an identity element. So, let's go through each of these properties. Maximum is associative, yes? It's commutative. X max Y is Y max X. It's idempotent. The thing max itself is itself, but you need an identity element, a thing such that the maximum of X and this identity element is always X. Right. And so, zero, the maximum of something and zero is always that thing, as long as it's non-negative, right? So, max... different people differ on whether a join semi-lattice needs a zero, whether it needs a... but for me, I always insist that it have... that there be an identity element. So, in Haskell, we have type classes, like monads and monoids and Traversable. And we have like a typeclassopedia. Yeah, semi-lattice would be a type class. Okay, great, that was my question. And where would it fit in the tree? And why don't we have it in the tree? Because there's too many goddamn mathematical concepts. Having all of them in your standard library would be a bit much. I mean, you can add it, right? I think, more practically, the reason why you don't have it yet is nobody has made a strong enough case for it to be in the standard library. It's not hard to add as your own library, right? Type classes aren't something limited to the core libraries. You can make your own. And that's exactly what I do when I need to use semi-lattices. But, yeah, what it would have... Well, okay, so first let me talk about the other way to think about the lattices.
We've talked about them as an operator, but there's an equally important way to view them, which is as a partial order. Yeah, that's another phrase that- So, a partial order is just something that acts, sort of, like less than or equal to, except it doesn't have total comparison. There can be two things, neither of which is less than or equal to the other. So, the classic example here will be sets under sub-setting. So, you say that the set X is less than or equal to the set Y, if X is a subset of Y. Not necessarily a proper subset. And so, X is less than or equal to itself. It's a subset of itself, as far as I'm concerned. And so, this forms an order, in a sense, in that it's transitive. If X is a subset of Y and Y is a subset of Z, then X is a subset of Z, right? Or instead of sub-setting, you might say included in: X is included in Y and Y is included in Z. It's reflexive. A thing is included in itself. A set is included in itself. And it's antisymmetric, which is if X is a subset of Y and Y is a subset of X, they are the same set. That's what we mean by partial order, those three things. And it's equivalent to a lattice? Every lattice is a partial order. Not every partial order is a lattice. I'll get to that bit in a bit. But, yeah, it needs to be a thing like less than or equal to that has to be reflexive, transitive and antisymmetric. But it doesn't have to be total. There can be things that are just incomparable. Like, the set containing just one and the set containing just two. Neither of them is a subset of the other. Okay. So, when is... A partial order is a lattice if it has a least element, a thing that is smaller than everything, and any two elements have a least upper bound. So, a thing that is bigger than both of them, but smaller than any other thing that's bigger than both of them. So, the example here would be the union of two sets, right? And the least element, obviously, is the empty set. It's smaller than any other set.
The union of two sets is bigger than both of them. It's a super-set of both of them. And anything else that is a super-set of both of them contains everything in the union, right? So, that's what a least upper bound is. And that's what a semi-lattice is as a partial order. It's a partial order that has least upper bounds and a least element. And you can prove these two things, these two views of them, the partial-order view and the operator view, an operation which is associative, commutative and idempotent, are equivalent. Because you can prove that the least upper bound operator is associative, commutative and idempotent and that the least element forms its identity, right? I'm just curious from a historical perspective, like was there a person, or an event, that joined these things? God, I have no idea. This is all really old math. Like, this is... Yeah. Was it Hilbert or someone before, like Euler? It wasn't Aristotle, you know? No. Yeah. I mean, there's like umpteen zillion variations on this, right? So, like there's semi-lattices. Some people take that to mean not necessarily having a least element. And then, you can talk about semi-lattices with least elements. And then, you can have meet semi-lattices, instead of join semi-lattices, which is the same thing flipped: instead of having a least element and least upper bounds, you have a greatest element and greatest lower bounds, right... And then, you can consider having both of these, and that's what we usually call a lattice: you have a least element, a top element, least upper bounds and greatest lower bounds. Subsets of a given set form a lattice, in that sense. The least element is the empty set. The greatest element is the whole set, everything. Least upper bound is union and greatest lower bound is intersection. And these structures are very well behaved. They have all sorts of operators. And they all interact nicely. And you can... anyway. So, anyway, yeah.
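Both views of a join semi-lattice described above can be checked mechanically on a small example. This sketch exhaustively verifies, for the subsets of {1, 2}, that union satisfies the operator-view laws (associative, commutative, idempotent, identity {}) and that x ∪ y really is the least upper bound under ⊆.

```python
from itertools import product

# Check both views of the join semi-lattice of subsets of {1, 2}.
universe = [frozenset(s) for s in [(), (1,), (2,), (1, 2)]]
join = frozenset.union

for x, y, z in product(universe, repeat=3):
    # Operator view: associative, commutative, idempotent, identity {}.
    assert join(join(x, y), z) == join(x, join(y, z))
    assert join(x, y) == join(y, x)
    assert join(x, x) == x
    assert join(x, frozenset()) == x
    # Partial-order view: x ∪ y is above both x and y...
    assert x <= join(x, y) and y <= join(x, y)
    # ...and below anything that is above both (least upper bound).
    if x <= z and y <= z:
        assert join(x, y) <= z

print("all semi-lattice laws hold for subsets of {1, 2}")
```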
You can view them as having to do with partial orders or you can view them as just being this algebraic structure. They have an operator and it obeys certain laws. So, why are semi-lattices interesting? Oh, right, because sets are a semi-lattice, right? And remember how I talked about we have relations and you have tables and you have predicates? Well, here's another thing to add to the list, sets plus tuples. Because a relation, or a table, can be thought of as a set of tuples. A tuple representing a row in the table and the set representing the whole table. And so, this is what Datafun does. It takes a functional language. It takes a basic simply typed lambda calculus, which is sort of like your vanilla starter base for functional language design. And it adds to it finite sets as a data type and tuples, which are easy and ordinary, right? And then, with those... Okay, so the next question is, okay, you have finite sets. But how do you make them and how do you use them? And the answer that I give is, well, making sets is easy. You just list the things you want to be in the set. But how do you manipulate sets? You use set comprehensions, right, which is you can basically say, "For every element X in the set Y, do something, right, and give me the union of all those somethings." So, you can say, "For every element X in the set one, two, three, union together the result of the set two times X, X plus two." So, the body of the comprehension is itself a set. It's not just one element. Yes, right. It's not just one element. It can be a set containing one element or a set containing the other- Yeah, a set containing these elements, but it turns out to be equivalent. You can do it both ways and they're equivalent. The reason being if you just want to give one element. If you want to restrict it to... I mean, this is impossible to describe without pen and paper. So, I'm just not going to try it.
But basically, it just lets you write down what you would normally think of, or what a mathematician would normally think of, as a set comprehension, within certain limits. So, you can't do infinite sets this way, but you can filter existing sets. You can take the cross product of sets, because you can just comprehend over two sets. For every X in the set A, for every Y in the set B, give me X, Y. And if you can filter and you can do cross products, then you can do relational joins. And so, you started from the lambda calculus and added set comprehensions... But you didn't add relational algebra. But you can maybe prove that it's like equivalent or? It can express everything in relational algebra and some other stuff that's not in relational algebra. Because functions from- Yeah, for example, right. But, I guess, you could think of it as you took two algebras, like lambda calculus, or two calculi, and relational calculus, and you- Sort of, yeah. I sort of glommed them into one language. That's one way of thinking about it. But technically, you just did lambda calculus and then, you added some things. Yeah, lambda calculus plus set comprehensions and a few other things. Like, if you want to be able to do an equijoin, you need to be able to test equality. So, add equality tests and booleans, okay. Not a big deal. So, in functional languages, before you came along, I was already able to filter. And I guess, set comprehension... what's the equivalent in functional languages that we just have? You could have it as list comprehensions, right? Set comprehensions are like list comprehensions, except for sets. And a list comprehension... Comprehension is a weird word. I don't know why that word got there, but that's the word we use. I used list comprehensions in Python, but I think in other languages, I just use... It's just map. Yeah, list comprehensions can all be done in terms of map and filter. Oh, okay. Map and filter.
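The chain above, comprehensions give you filtering and cross products, and those give you relational joins, can be sketched directly with Python's set comprehensions. The employee/department relations are made-up illustrations; the point is that an equijoin is nothing but a cross product plus an equality test inside a comprehension.

```python
# Relational operations via set comprehensions.
# (The relations are invented for illustration.)

employees = {("alice", "eng"), ("bob", "sales")}
depts = {("eng", "building 1"), ("sales", "building 2")}

# Filter an existing set:
eng_only = {(n, d) for (n, d) in employees if d == "eng"}

# Cross product: just comprehend over two sets.
pairs = {(e, d) for e in employees for d in depts}

# Equijoin = cross product + equality test:
joined = {(name, dept, bldg)
          for (name, dept) in employees
          for (dept2, bldg) in depts
          if dept == dept2}

print(sorted(joined))
# [('alice', 'eng', 'building 1'), ('bob', 'sales', 'building 2')]
```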
But then, I've never mapped over two- That's not true. You also need monadic join, another join that's different from all the previous joins we've discussed. So, you need, basically, concatMap. ConcatMap, yeah, that makes sense. ConcatMap and from that, you can get filter. And, yeah, that's it. So, basically, it's concatMap. So, well, we already had this in functional languages before you came along? So, wait, you just invented a functional language that's just a regular functional language? Well, but it's not Turing-complete. Oh, okay. And you took a functional language and you just subtracted things. That's another way of looking at it. Yes, it is less powerful than almost every other functional language. Deliberately less powerful in the hopes that we can apply a bunch of prior work that's been done in the Datalog community and the SQL community on optimizing expressions of this form and optimizing Datalog evaluation and optimizing SQL evaluation. Because if you take that work and you try to apply it in the context of a full-blown higher-order Turing-complete functional language, it is a nightmare. But if we limit our language enough, we're hoping, and we have some reason to believe, that we might have some success in generalizing the existing optimization literature, so that you can write the stupid dumb obvious way of computing a join. And your compiler or your implementation will figure out how to do it efficiently using an index. Which is not something that existing functional languages do, right? The best you've got is, sort of, list fusion, which is an impressive optimization, but it's sort of like combining multiple passes over a list into one pass. Like, map, map, filter, map, filter gets combined into one single pass over the list. I didn't know that that's what happens sometimes. Yeah, this is an optimization that a bunch of people have worked on and they've gotten it to be pretty good. And that is table stakes for database query engines.
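The claim that comprehensions reduce to concatMap can be sketched concretely. This is an illustration (the `concat_map` helper is defined here, not a library function): each generator clause becomes a `concat_map`, and a filter clause becomes a `concat_map` over either a one-element dummy list or an empty one.

```python
# Desugaring a comprehension into concatMap.

def concat_map(f, xs):
    """Apply f to each element and concatenate the resulting lists."""
    return [y for x in xs for y in f(x)]

# [x * 10 for x in [1, 2, 3] if x != 2], desugared by hand:
result = concat_map(
    lambda x: concat_map(
        lambda _: [x * 10],          # the body, as a one-element list
        [()] if x != 2 else []),     # the filter clause
    [1, 2, 3])

print(result)  # [10, 30]
```

From `concat_map` you recover `filter` (return `[x]` or `[]`) and `map` (always return `[f(x)]`), which is the "basically, it's concatMap" point from the conversation.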
That is just like nobody talks about that, because it's freaking obvious. And this is what happens when you make your language more powerful. Everything gets harder. And so, we're trying to find a sweet spot of something that is more powerful than Datalog, but still constrained enough that we can apply existing optimizations to it and imitate what has been done in the database community and the Datalog community. Yeah, well said. Personally, I'm someone who's really excited about the idea of less powerful languages. I don't know if you feel this way, but I feel that like our patron saint is the "Go to considered harmful" article, in that like... It was very explicitly calling out that languages are too powerful in this one explicit way. And if we get rid of this, it'll actually be an upgrade. And I guess, it spawned a whole class of articles. Like, X, Y and Z considered harmful. But particularly in making languages less powerful, so that we get... So, almost like we make them more well behaved, so we get more mathematical properties out of them. Then, we can optimize them easier and other things like that. I'm a big fan of this theme. Yeah, I think I'm also kind of a fan of this. I don't know that it's the... I'm kind of a pluralist. I believe in taking every approach under the sun and seeing how it works out. But I think that trying to make less powerful languages, so that you can do more to the languages, is an under-explored area. What's the opposite of a pluralist? Because I feel like I'm someone who's like ideological? Unitarian? I don't know. I kind of like to do the opposite. I don't like to just kind of try things until I find one that works. From first principles, how do I go? Or, you know- So, when I say I'm a pluralist, I say that as a sort of belief about how our research community should function, not a belief about how any individual should do research. I think it totally makes sense to focus on one particular idea, or one particular principle.
I just don't think that any one principle is the only one we should be considering. Yeah, in the whole world. As a community, yeah. Yeah, sure, yeah, yeah. Of course. Okay. But this is in contrast to people who want to build one language to rule them all, for example. I think that's sort of doomed to failure, because I think that humans have many purposes. And programming languages are going to need to be built for many purposes, too. I think some of the most interesting ideas in programming languages have come from programming languages which tried to say everything is a something, right? Everything is an object. Everything is a function. Everything is a process. There's a list of these. There's a list of... for each language X, everything is a Y. I'm just trying to do a verbal set comprehension. I think it's on TiddlyWiki. It's like a list: given a language X, everything is a Y. Right. On the one hand, I think that some extremely interesting research has come out of this, right? Like, functional programming came out of lambda calculus, where everything is a function. Object-oriented programming came out of trying to make everything an object. There are process calculi, which come out of everything is a process communicating with everything else. Logic programming comes out of everything is first-order logic. But at the end of the day, I don't believe any of it. Like, no, everything is not an object. Everything is not a function. Not everything is a process. Not everything is amenable to being described in terms of first-order logic. At the end of the day, the world is complicated. And we need many tools, many approaches, to try and understand it. But the kind of single-minded focus on a single thing is how you find out what the limits of the idea are. Until you commit yourself to fully exploring a particular idea, you probably won't realize just how useful it is. So, this is sort of the sense in which I'm a pluralist.
I do not believe that any one of these single-minded ideas will rule the world, but I think the world is better for having had lots of people who have tried to push these ideas as far as possible. I think, if I had to bet, that probably the view that would win-- and so, I feel like it's irrational for me to hold these two views -- but I am a "one thing will rule them all" kind of person. Maybe I should work on changing that about myself, because I explicitly notice that it's like a weird thing, or maybe it's just like a hope. I like the idea of... It's almost physics envy, you know? You've heard the term physics envy? It's like a derogatory term for fields that aren't physics that try to act like physics. Like, mathematize things or reductionize things, which is kind of what you're getting at. Everything is just this one... We reduced everything in the complex world to this one simple thing, because it's just more elegant to look at the world that way. So, I hold out hope that we'll figure it out. Yeah, is there a... And you're someone who builds on the lambda calculus. Is there a chance that "everything is a function" is the one that wins? Well, but here's the thing. Almost every language that builds on the lambda calculus drops "everything is a function". When you're working with natural numbers in functional languages, are they represented as Church-encoded natural numbers? No. They're represented as a bunch of bits representing a natural number in two's complement. Well, I think there's... That's kind of looking at an implementation detail. Like, the semantics of a language- But it's not an implementation detail, right? It's there in the semantics, too. I cannot apply a number to a function and get that function applied that many times, right? That's what a Church-encoded natural number is. I see. Yeah, yeah, yeah, yeah. Right?
There is one language I know of which sort of tries to take this idea to its logical extreme and, more or less, encode everything. And that is... I hesitate to even mention it, because... Urbit tries to sort of do this. It has an extraordinarily simple virtual machine, which is almost a combinator calculus, right? And then it tries to build everything up on top of that. And if you can't do something efficiently, then rather than introducing a new primitive, they have this concept (I think they might have called it "jets") of writing the code that does it inefficiently and having the compiler recognize that specific code and turn it into something that does it efficiently. Which is a cool idea, but I'm hesitant to mention the project, because it's tangled up with a whole bunch of ideological ideas about how programming should be organized and how distributed systems should be organized. And it's also just filled with obscure jargon and it's really hard to decode what they're actually doing. It's funny you say this in this way, because I think it was last night, someone... You may have seen this. Someone I don't know on Twitter reached out like, "What do you think of Urbit?" I click on the new primer. I'm like, "Wow, this is so much better designed than what I've seen in the past." And so, I posted it on Slack, because of the graphic design of it. Maybe you haven't seen it; it's new. I don't know when it came out. And the first response was basically what you said. Like, there are some useful things here, but there's just so much noise and ideological stuff that... It's just hard to focus on it. It, and I think other projects like it... there are a lot of projects that are just super wacky. But you kind of have to focus on the parts that are worth noticing and then just talk about those parts. You shouldn't throw the baby out with the bathwater, I guess. Yeah. And this idea of writing the inefficient way and having the compiler recognize it is not unique to Urbit.
I think it was somebody working on another wacky project called Avalon Blue who calls it... I wish I could remember their name. They call it "accelerators". So, it's an idea that's going around in the fringes of the programming language design community. It's very cool. If you wanted to, you could fully expand everything and see the inefficient, but recognizable, code. But then, under the hood, the optimizations happen. Yeah. I'm just skeptical that it'll overall reduce the complexity of your system, because you still have to have that complexity, the efficient implementation, somewhere. It's just that now it's hidden beneath... It's hidden in your compiler implementation. The Out of the Tar Pit paper, I think, did a good job of arguing for something along these lines, of shoving your optimizations away from your more declarative code. Yeah, but I think there's a difference. So, the approach that I'm taking is, again, trying to write declarative code. Write the obvious join algorithm, which is just: loop over set A, loop over set B, if some condition is true, yield a tuple. Which, if you implement it naively, is at least quadratic in complexity. And then having... you might have recognized this. But the accelerator approach is you don't just do that. You write the code that implements the naive version of a join, and then you have your compiler recognize when it is compiling that specific chunk of code, right, as if it's a Reflections on Trusting Trust attack, only being used to make your code go faster. And then compile it into something more efficient. Which strikes me as trading... you're gaining one sort of elegance, in that you just have one Turing-complete language as your source, but you're losing the simplicity of your compiler. You're not cashing in on the ability to make real... That makes sense. It feels ad hoc.
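To make the trade-off concrete, here is a sketch (mine, not Datafun's or Urbit's actual code) of the "obvious" quadratic join next to the index-based version an optimizer would hope to rewrite it into:

```python
# The naive nested-loop join described above, and the hash join a
# query optimizer would replace it with. Same result, different cost.

def naive_join(r, s):
    # O(|r| * |s|): loop over set A, loop over set B, yield on match.
    return {(k, v, w) for (k, v) in r for (k2, w) in s if k == k2}

def hash_join(r, s):
    # O(|r| + |s|): first build an index on s, then probe it.
    index = {}
    for (k, w) in s:
        index.setdefault(k, []).append(w)
    return {(k, v, w) for (k, v) in r for w in index.get(k, [])}
```

The accelerator idea is to let the programmer write only the first form and have the compiler recognize it and substitute the second.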
So, back to Datafun, one way to look at it is it's a functional language, but we removed some things -- I want to get into what those things are -- in the hope of getting it as performant as a database. Or it could be used as like a query language. You have really long, really big sets. Like, huge sets and- Because in a programming language, you would never- If you were going to use sets that big, you would probably use a database. Exactly. That's a good way to put it. So, Datafun is like a programming language that can work with sets that are so big that you normally would've used a database for them. Well, that's what we hope it can become. The implementation right now is not like that. You would not use it for anything other than toy examples. And part of this is that building a really performant database engine is a lot of work. And I don't... I'm one person with an advisor. And that's not enough people to build a performant database engine. So, the things we can work on are, sort of, the theory, right? Showing that all these optimizations that people use in real database engines and real Datalog implementations can be ported to a functional language like Datafun. And so, I haven't actually finished describing Datafun, right? So, we add sets and set comprehensions, but that only gets you to relational algebra. That only gets you to, sort of, SQL-like levels of capability. To do what Datalog can do, you also need a certain sort of recursion, the ability to define sets recursively. Yeah, the join thing we talked about. Yeah, for example, transitive closure. Transitive closure is defined in terms of itself. The fixed point of joins. Yeah, the fixed points. Yeah, so we add a certain sort of fixed point to Datafun that allows you to define sets recursively. And that is what allows you to do the things Datalog can do. And one of the interesting things about this, from sort of an academic point of view that maybe is less interesting to other people, is...
Well, actually, let's go back to logic. Paradoxes. Datalog allows you to define things recursively. Now, if you think about logical paradoxes, a lot of them involve self-reference. Like, the liar's paradox: this sentence is false. Or the set of all sets that don't contain themselves, which is also a paradox. So, you might start to worry, if you hear that a logic programming language includes recursion, you might start to worry about logical paradoxes. And Datalog does something that prevents you from getting into trouble with logical paradoxes, even though you're allowed to define things recursively. And that thing is stratification. And what it means is you can write things that refer to themselves, but not in a negated way. You can refer to yourself, but you cannot refer to the negation of yourself. And if you look at all the logical paradoxes, they all involve not just self-reference, but negation. The set of all sets that don't contain themselves. This sentence is not true, right? So, that is what avoids Datalog getting into hot water. It avoids having things that have no clear meaning. The equivalent of that, when you move to a functional language and you start allowing yourself to define things recursively as a fixed point, is that the function you're taking a fixed point of has to be monotone. An increasing input must yield an increasing, or at least non-decreasing, output. This is because negation is, sort of, the fundamental non-monotone logical operator. Increase its input from false to true and its output decreases from true to false. Every other logical operator: increase its inputs and its outputs increase. Make some of its inputs true, and the output can only become true, or stay the same, right? So, not is the fundamental non-monotone logical operator. So, there's this connection between monotonicity and defining things recursively without screwing yourself over, without being inconsistent or, in sort of computational terms, being Turing-complete.
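A minimal illustration of such a fixed point: transitive closure as the least fixed point of a monotone step function. This is plain Python, not Datafun syntax:

```python
def fix(f, bottom=frozenset()):
    """Least fixed point: iterate f from the empty set until nothing
    changes. This terminates when f is monotone and the universe of
    derivable facts is finite."""
    x = bottom
    while f(x) != x:
        x = f(x)
    return x

def transitive_closure(edges):
    edges = frozenset(edges)
    def step(t):
        # Monotone: growing t can only grow the result, never shrink it.
        return t | edges | frozenset(
            (a, c) for (a, b) in t for (b2, c) in t if b == b2)
    return fix(step)
```

If `step` could consult the negation of `t` (facts *not* yet derived), each iteration could retract earlier conclusions and the loop might never settle; monotonicity is what guarantees a well-defined answer.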
And so, we need a type system that guarantees that functions are monotone. And so, that is part of, sort of, the more academic side of the Datafun work: a type system for guaranteeing that functions are monotone. And this is the modal type stuff? The modal type stuff is like version two of that. So, it's skipping ahead? Yeah. So, the original paper gives a type system that is able to guarantee that certain functions are monotone. And it's somewhat inflexible in certain ways. And so, I've been working on a more flexible version that involves modal types. But it's for the same purpose, effectively. It's for guaranteeing things are monotone. Got it, got it. So, when you do certain operations and functions, it- The effect they have on the order. For example, if you have, you know, expression one set-minus expression two. Find the difference between these sets. Well, that is monotone in the left-hand side, the set that you're subtracting from. But it's anti-tone in the right-hand side, the set that you're- But then, if like another operation does the opposite, like subtracts again, then it flips. It tracks the flipping. Yeah, well, the original one only tracks monotonicity and non-monotonicity. The modal type stuff will track monotonicity, non-monotonicity, and anti-tonicity. And I-don't-care-tonicity, which I... or we call it bivariance. So, it tracks four different ways an operation can care about the order of its arguments. It can say increasing inputs yield increasing outputs. That's monotone. It can say, "I give you nothing. I give you no guarantees." That's non-monotonicity, right? No guarantees. It can say, "Increasing inputs yield decreasing outputs." That's anti-tonicity or anti-monotonicity. And it can say, "You can change that input however you like. My output will increase." That's not quite right... How could that be? Right. So, the obvious answer is a constant function.
Change the input however you like, the output stays the same, which as far as I'm concerned is increasing. So, when I say increasing, I mean weakly increasing. Staying the same or growing. Oh, okay. Sure. I'm like, how is that increasing? Weakly increasing. Or you can have a multi-argument function, which is constant in one of its arguments, but not in- Right. But actually, it's a little more subtle. What it actually is, is: as long as the input changes to something that it is related to, either by increasing or by decreasing, then the output will stay the same or increase. So, an example of this might be, let's say you have a type which has two copies of the natural numbers in it. So, it has, say, left zero, left one, left two, left three, blah blah blah. And it has right zero, right one, right two, right three, blah blah blah. And the way they're ordered is left zero is below left one is below left two is below left three. And right zero is below right one is below right two is below right three, blah blah blah. But lefts and rights are never comparable. They're incomparable. So, the kind of function I'm talking about, it's called a bivariant function. What it does basically is, if you change from left to right or right to left, I give you no guarantees. But as long as you stay within left, my output will not... will stay the same or increase, but it basically means it will stay the same. All right? Or if you stay within right, then it will stay the same or increase. This comes up basically nowhere in practice, but it makes certain internals of the system work out. It is, in a certain sense, dual to the "I give you no guarantees" notion. So, anyway, I don't know why I got started talking about this. The notion of a type system that keeps track in this way is cool. It reminds me of this Conal Elliott quote where he says, "Part of what makes algebra so great is that it doesn't keep track. Like, four plus three is just seven. And it's like it doesn't matter.
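The order being described, two incomparable copies of the naturals, is easy to write down directly. This is a toy sketch; the `('L', n)` / `('R', n)` encoding is my own, not Datafun's:

```python
# Two copies of the naturals: ('L', 0) <= ('L', 1) <= ..., and
# ('R', 0) <= ('R', 1) <= ..., but no ('L', _) is ever comparable
# with any ('R', _).

def leq(a, b):
    side_a, n_a = a
    side_b, n_b = b
    return side_a == side_b and n_a <= n_b

def comparable(a, b):
    return leq(a, b) or leq(b, a)
```

A bivariant function on this order promises nothing when the input jumps between the left and right chains, but stays put when the input moves to a comparable element within one chain.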
Seven isn't four plus three or five plus two. Seven is just seven." And he says if seven wasn't equivalent to three plus four or five plus two, if you had to keep track of five plus two, you would have a tree sort of thing. Algebra would be like a tree thing and it would be less elegant. So, is your thing somehow less elegant? Or do you not have to actually keep track that far into the past, because each thing is, at any point in time, either monotone or anti-tone? And so, you don't have to remember the past too much; it's not a tree. So, there is no explicit past here. What this is really tracking is properties of functions. A function is monotone or it's non-monotone or it's anti-tone, or whatever. And the type system can tell you whether it's monotone or anti-tone or non-monotone. I see. And so, plus and minus are functions. Plus and minus are functions. Plus would be monotone. Minus would be monotone in one argument and anti-tone in the other. And so, what you apply plus to... So, once you have a function that uses plus, that function itself just has a tone, as well. That function has a tonality, right. Okay, got you. So, there's no tracking really. It's just encoded in the types. Yeah, it goes into the types, right? So, the type system- Tracks this and analyzes your code according to a certain set of rules. I guess it tracks in the same way that the plus function tracks that its inputs are ints and so its outputs are ints. I see. Okay, cool. That's very cool. It seems obvious now that you say it. It seems like an easy thing to do, now that you say it. But I would have never thought to add monotonicity into a type system. Yeah. I mean, we added it because we needed it to capture Datalog. I wouldn't have thought of it, otherwise. But it turns out, monotonicity has all sorts of strange applications. So, as I said, monotonicity helps you avoid logical paradox, which is sort of why it's helpful here.
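A toy model of how tonalities propagate through code, using the plus/minus example above. This is my own simplification, not Datafun's typing rules: tonicities as signs, composed by multiplication, with the bivariant case left out:

```python
MONO, ANTI, NONE = 1, -1, 0  # monotone, antitone, no guarantees

def compose(outer, inner):
    """Tonicity of outer(inner(x)). E.g. antitone of antitone is
    monotone; composing with 'no guarantees' yields no guarantees."""
    return outer * inner

# x - y is monotone in x and antitone in y:
minus_tone = {"x": MONO, "y": ANTI}

# h(x, y) = a - (x - y) wraps the subtraction in another antitone
# position, so both tonicities flip: h is antitone in x, monotone in y.
h_tone = {v: compose(ANTI, t) for v, t in minus_tone.items()}
```

This mirrors the "it tracks the flipping" exchange earlier: the tonality of a compound expression is computed from the tonalities of its parts, just as its type is.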
It allows us to define things recursively while still having a well-defined best answer. Monotonicity also shows up in distributed and concurrent systems a bunch, though. So, there's this work out of the west coast, Berkeley and other places. Yeah, the BOOM stuff, or specifically the consistency as logical monotonicity work, right? So, I think there's a paper called "Consistency as Logical Monotonicity", or something like that. Sounds like a cool paper. And it says basically that the things that you can implement in a distributed system without any coordination, without having to get nodes to specifically coordinate, are exactly those things which are monotone, in a certain sense. I see. Like a number that can only go like- Yeah, a counter that can only go up. A counter that can only go up or... What's the other one? Yeah, that's like, I guess, the canonical example. Or an append-only set. Yeah, append-only lists. Append-only, but where ordering doesn't matter. Yeah, I mean, yeah. That would work. I think you can also have an append-only list. It's just a little bit more complicated. But, yeah, and this is sort of connected, in a way that I still don't fully understand, to the CRDT work. And CRDTs are connected to semi-lattices. Because what are CRDTs? Okay. You keep track of a data structure that represents, sort of, your state at each node. And there is a way to merge these states when you get one from another node. And this operation has to be associative. It has to be commutative. It has to be idempotent. And it may as well have a zero value or a starting value. And that makes it a semi-lattice. So funny. I never thought of CRDTs... because the merge function, I never thought of it as a function. But, yeah. Right, because it's often not explicit, right, this merge function. That's not usually the way you implement it. But it is there. When you think about it, it's there in your conception of the underlying data structure.
I guess, because it's the flipped perspective. You think of a CRDT as: I have an object. You have an object. And we merge the objects. But what's actually happening is the thing I do to my object and the thing you do to your object are then merged. So, it's like the edit actions that... you apply the merge operation to. It sort of depends. There are a lot of different ways of representing CRDTs. So, the simplest way of representing CRDTs, and the way I'm sort of thinking of it, is, sort of, state-based. So, each node keeps a data structure representing its current state. So, for a grow-only counter, this is simple. It's the counter, the value of the counter. And then, to synchronize with another node, you would just send it your state. Oh, okay. You just send it your state. You don't send it the diff. You don't send it the diff. You just send it your state. Right. Sending it diffs is sort of an optimization on top of this. Or at least, that's the way I think of it. But once you do that, the connection to semi-lattices becomes a lot more obscure. If I had the time, I would go investigate this area from the viewpoint of, "Hey, semi-lattices are cool. Can we make sense of all of this in terms of semi-lattices?" But I don't have enough time. I'm busy doing Datafun. But, yeah, I mentioned this idea to some people, in particular, Martin Kleppmann, who has this idea of using Datalog to implement CRDTs. Which I find super cool, right? So, it seems to me that there is some basic science waiting to be done in this area of CRDTs. But I really am not up-to-date on the area. I only have an outsider's view. We've been doing a lot of talking about math in this conversation. And I'm having a great time. I love math, but I think I've seen people in our community, people trying to improve programming, criticize math, or like using math too much. Not criticizing math for its own sake, but people say that we are like...
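A state-based grow-only counter makes the semi-lattice structure explicit. This is a minimal sketch, not any particular CRDT library; a multi-node map is used rather than a single integer so that merge is a proper join:

```python
# Each node tracks a map from node id to that node's local count.
# merge is the semi-lattice join: pointwise max, which is
# associative, commutative, and idempotent, with {} as the zero.

def increment(state, node):
    s = dict(state)
    s[node] = s.get(node, 0) + 1
    return s

def merge(a, b):
    return {n: max(a.get(n, 0), b.get(n, 0)) for n in set(a) | set(b)}

def value(state):
    return sum(state.values())
```

Synchronizing is exactly the "just send it your state" protocol from the conversation: a node receives a peer's whole state and calls `merge`; idempotence means receiving the same state twice is harmless.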
We make our types too complicated. We're trying to incorporate math in ways that over-complicate things, and programming isn't math. And we shouldn't make it math. We should make it its own thing. How would you respond to that criticism? I have somewhat complicated thoughts about this, but... So, the first thing I would say is it definitely can get in the way if you are trying to learn something quickly and you want to just start, right? So, for people first learning to program, I think that excessive abstraction, including excessive use of type class abstractions, can definitely get in the way. I think, for example, probably we should teach people to program in dynamically typed languages, because it is one fewer thing to worry about. I'm not dogmatic about this. I don't have a strong opinion on this, but that would be my intuition. But I think that these abstractions, like type classes and monads, come about because they really are recurring patterns in programs. And it really does help, especially for programming in the large, to have names for these patterns and ways to reuse them in your code. And how much it helps will depend on what kind of code you're writing and what kind of things you care about. If you really need absolute control of what's going on with all your bits, then abstractions can absolutely get in your way. But a lot of code doesn't need that. And for a lot of code, it's more important that you be able to grasp at a high level what you are doing, and to articulate it at the level at which you are thinking it, for which you need to be able to have abstractions. So, basically, I think the level of abstraction at which you work is one part of the trade-off space that is programming. You always have to trade off things against other things. And abstraction pays dividends in the long run, but it requires some investment up front. I think that's a really great lens, that it's like a short-term, long-term thing.
If you want to just get going... And I think a lot of people who want to improve programming, what they really care about is the onboarding. People should be able to learn to code as fast as possible. Yeah, and I think that is important. And then, there are other people who care about, once you've learned, how easy it should be. And they're almost opposed. Like, they're almost directly opposed. Like, the easier it is once you know how to do it- Well, I think we perceive them as opposed. This is another one of my bugbears. I think we perceive them as opposed because they are simply different. Right, and both hard. There are clearly ways to optimize one at the expense of the other. Yes, but there are also ways that optimize both of them. And we don't talk about those, because we just do them. If there's an easy way to improve both things- Of course, of course. I see. Yeah, yeah, yeah. There's three classes- We only talk about the ones that, yeah, yeah, yeah, aren't opposed. Yeah, of course. Yeah, yeah, yeah, of course. Well said. I've forgotten my train of thought. Yeah, yeah. The things that are good for both beginners and experts are just in all programming languages. Yeah, unless they're hard. Unless they're like... Well, then it's very clear that you should just change that language. Unless it is hard to do them... Oh, I see. I see. Yeah, yeah, yeah. I thought you said unless it's a hard language, unless it's just a bad language. Yeah, yeah, yeah. So, I interjected to affirm that I really like the idea that math... So, one way to think of it: instead of using the word "math", which has all sorts of weird connotations, you could just use the word "patterns", and recognizing patterns. And when you're first introduced to a subject, you don't see the patterns, because everything is new. And so, to just throw patterns at people right at the beginning is not a good idea.
But the more you do them, the more you're going to want to use patterns, because you don't want to be repetitive. And so, if you're interested in making programming easy for beginners, then throw away the patterns, because they won't mean anything to them. Or build them in as features of your language, so that they don't have to think about them. All right. So, like, don't force people to use go-tos. Just have them use structured for loops, right? That is, in some sense, a pattern. For loops are a pattern that you can express in terms of go-tos, but you can turn that pattern into a language feature, so that you can just think at the higher level. So, there are some abstractions that pay dividends immediately, even for beginners, like for loops, right? Other ones take a little more effort to learn and maybe it's not worth trying to introduce a beginner to them. So, you said that you can sort of replace the idea of mathematizing programming with the idea of recognizing recurring patterns. But it's also true that there are different sorts of patterns in programming languages, some of which we know how to quantify and exactly capture using math, some of which are a little fuzzier at the moment, right? And I think there's this more general divide in programming between techniques that are useful when you know exactly what the problem you're solving is, and techniques that are useful when the problem that you're solving is sort of fuzzy and not very clearly defined, but still important, right? Sometimes you see this as a frontend/backend dichotomy. That's how it manifests, sometimes. But I think it's more to do with how precisely you can define what it is you're trying to do. How iterative your process is? Are you changing your... What this brings to mind is waterfall versus agile. Are you a start-up that's iterating really quickly, or are you some old company that's rewriting existing software to do the same exact thing?
Yeah, I think that's another way that this same distinction can manifest. And I think, basically, mathematics is most useful in stuff where you know exactly what it is you want. Mathematics is about formalism. It's about being able to precisely specify things. It is most useful when we know exactly what it is we want and what we're doing. It's less useful when you're doing things that are fuzzier, right? Not not useful, right? And I think a lot of the history of the progress in programming science has been being able to more precisely specify what we want for larger and larger components, right? And that's why I think you sort of see an increasing tide of mathematization in programming languages and programming work, because we are building up from the bottom, as it were. Building up from the smaller things towards larger and larger things that we know how to precisely specify what we want from. But we also work down from the top. We work from: we want to make a system that does this thing involving humans, right? We want to make a video game that is enjoyable. We don't know how to specify that, right? And so, the tension between those two things, I think, produces a lot of this conflict between the people who are really gung-ho about formal methods and mathematization and so on, and the people who are really not gung-ho about that and think that premature formalization can bog you down in details that you don't want to get bogged down in. Yeah, yeah. And part of me wonders if you can have the best of both worlds, where you don't get bogged down, but also get the benefits of the more formalized stuff. And so, the way it occurs to me is: part of what I like about types is they are automated reminders of like, "Oh, by the way, in certain cases that you didn't handle, things aren't going to go so well." But I don't want the types to stop anything. I just want, on the side, as an afterthought, "Just so you know, if you care to know, there's some things.
There's some implications that you may not be realizing." That's what I feel like could maybe unify into the best of both worlds, where it doesn't prevent you, doesn't bog you down, but also doesn't leave you in the dark. Yeah, and I think that's sort of a compelling vision, which is sort of what the gradual typing work is investigating. And Cyrus's work is sort of investigating this as well, with the holes that allow you to have not fully formed programs, or not fully well-typed programs, that can still execute. Yeah, I've really had bad experiences with TypeScript. I don't like it at all. People love TypeScript. Yeah, I know someone who really loves TypeScript. Everybody loves TypeScript and it's my only experience with gradual typing. It's just so much worse than just programming JavaScript. And I guess, to your point, when I program JavaScript, I just want the code to run. I don't care about the cases that you're talk... I don't. Like, stop forcing me to fix these type things. I just want to iterate. And at the end, I want to hear what you have to say about why my types don't make sense. But you're bugging me right now, you know. Interesting. I guess it is true. The gradual typing work is not generally... Cyrus's work is about being able to run incomplete or ill-typed programs with those holes inserted, but most of the gradual typing work is not about that, right? You can have a program that is not typed and you can still run that program. But if you add the types, they better type check, otherwise, no. Yeah, yeah, yeah. So, anyway, how does Datafun compare and contrast with the Eve work that you are such a fan of? It's much less ambitious. I'm not trying to reinvent all of programming. I'm just trying to see whether we can combine functional programming and Datalog, right, simple relational programming.
So, Eve was trying to be a system which enabled non-programmers, or people who didn't have a lot of programming experience, to build complex, distributed systems. I'm not trying to do that. I'm just trying to combine what I like best about Datalog, right? That it lets you do simple relational stuff without worrying too much about how the data is represented behind the scenes, and then run it fairly efficiently, with what I like about functional programming, which is that I can abstract out repeated patterns in my code. Okay. And in a dream case for you, is it Datafun itself that goes on to be something that people use, or more that people take the underlying research and embed it into another language? I'm all for any of these options, right? So, I think definitely the most likely dream scenario, right, is that the ideas explored in Datafun will help other people design languages and build systems, right? And I think Datalog has been influential in that way, in the same way relational algebra has been influential in that way. Oh, interesting. Datalog influenced- Datalog, so there's sort of a community of various industrial people who use various Datalog dialects. So, Semmle uses Datalog to do static analysis of large code bases. LogicBlox, who recently got acquired and no longer exists in any way that matters, but while they were around, they used Datalog to do business analytics. Yeah, I've heard of LogicBlox from Jamie Brandon, who used to work on Eve. Yeah, Eve was influenced by Datalog. So, if they had gotten off the ground, that would have been great. And, I don't know, it might seem that way to me only because I'm doing work on it, but there are people with ideas built around Datalog cropping up in the research community, and I think also leaking into certain parts of the industry community, as well. It's still sort of on the fringes. Oh, there's the work out of Berkeley. The BOOM work, right? "Dedalus: Datalog in Time and Space."
So, is there a world in which Datafun becomes like a competitor to SQL or to Datalog? Is it kind of equivalent in that way? Because I don't enjoy SQL, for all the reasons people don't enjoy SQL. But it's like fast, and I like relations, so could I like swap out my Postgres database with like a Datafun database? In some hypothetical future, yeah. There are all sorts of questions that would have to be answered along the way. But yeah, in some hypothetical future, Datafun could be a query language for a database. The questions that would have to be answered along the way are, well, obviously the ones we are still working on. Can we port the existing optimization work from the database community to work on Datafun? And also, SQL isn't just a language that lets you do queries. It also lets you do updates and, you know, migrations and all sorts of stuff. Yeah, right, migrations, of course. And transactions. Does Datafun- It operates only on an existing database. You can't... There's no- It doesn't talk to an existing database at all. What I'm getting at is it's immutable. You can't like mutate. Yeah. It's like the relational algebra. It's just a query language. It just lets you answer questions about some data that already exists, or write expressions that compute. I see. I see. It feels like it, yeah. Now, it's making me think of Datomic, the Rich Hickey project. Yeah, although that also deals with change in some way that we don't, right? They have some story about it being sort of append-only, or always keeping your old data around. But we don't have any notion of time at all in Datafun. It's just sets and computing with them. Yeah, what made me think of it is they have a query language that feels like Prolog. You can like- Yeah, it's Datalog, effectively, right? They sell it as a Datalog dialect, I think. It's proprietary, so I haven't been able to actually look at it properly. I see. Okay, yeah, yeah, yeah.
So, that's another thing to add to the list of things inspired by Datalog. Okay. So, in theory, the Datafun work could be a competitor or an alternative? Or it's like in the same space as Datomic. Yeah, in the same space as Datalog, or as SQL. But again, there are a bunch of questions, practical questions, that would have to be answered first. Yeah, so, incremental computation is when you change the inputs to a function. You're like reusing some of the old... I'm sorry. Let me ask it again. So, maybe give us a foundation of what incremental computation is. I mean, incremental computation is just the idea, basically: you run a function once, you change the input, and you want to know what the function's result on the changed input is. How can you compute that more efficiently than just recomputing the function from scratch on the new input, right? That's one way of thinking about it, right? And so, from a programming experience perspective, what I'm curious about is a slightly different perspective on incremental computing. If I keep the inputs the same, and I slightly change the function, what can I reuse from the past computation? That's, in some sense, the same question, in that you have a function that takes the program and the input and gives you the result, and you're changing one of the inputs to that function, namely the program. But it's a much harder problem, because programs are really complicated structured things. And so, you're kind of taking the derivative of the input. So, we're taking the derivative of a program. Yeah. So, one perspective on incremental computation is that you are taking the "derivative" of the program, or the function, that you want to compute incrementally. Because you want to know how does this function change as its input changes. And that's actually related to the result that the derivative of a fixed point is the fixed point of its derivative.
Basically, I found a rule for, in a certain system of incremental computation, finding the derivative of a fixed point. And it turned out to involve taking the fixed point of the derivative of the function that you were originally taking the fixed point of. And so, what it was useful for is making Datafun go faster, because Datafun allows you to compute fixed points. And fixed points can be computed naively just by taking the function you want the fixed point of and repeatedly applying it, smacking it on the data, until its output equals its input. This is kind of annoying, because you have the same function and its input is changing, right? First, its input is, you know, the empty set. Then, you take its output and you use it as its input. So, you have changed its input from the empty set to whatever the output was. And then, you reapply it. And you keep on doing that. And really what you want to do is incrementally recompute the result of the function, because its input has just changed. It happens that the thing you changed it to was its old output. That's the weird part about fixed points, but it's really just incrementally recomputing the function. And that's where incremental computation and fixed points and derivatives all intersect. I see. Cool. If you want to make fixed points more efficient, you can use incremental computation. And then, it's really important that the fixed point of the derivative is the derivative of the fixed point. It's important if you nest fixed points. Because, for example, if I have a function that I want to take the fixed point of, and the function is defined in terms of a fixed point, I need the derivative of the function to compute the outer fixed point incrementally, and the function itself involves a fixed point, so I need to know how that fixed point changes. Okay, I'm going to have to listen to this a couple times post-production to get it. But that sounds... Apologies to anybody listening at double speed.
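The naive fixed-point loop described here, and the incremental refinement that the "derivative" idea buys you (known in the Datalog world as semi-naive evaluation), can be sketched in a few lines of Python. This is an illustrative sketch of the idea only, not Datafun's actual implementation; the function names are made up, and transitive closure stands in for a general Datalog-style fixed point:

```python
# One application of the rule: path(x, z) if edge(x, y) and path(y, z).
def step(edges, paths):
    return edges | {(x, z) for (x, y) in edges for (y2, z) in paths if y == y2}

def naive_fixpoint(edges):
    # Smack the function on the data repeatedly until output equals input.
    paths = set()
    while True:
        new = step(edges, paths)
        if new == paths:          # output equals input: fixed point reached
            return paths
        paths = new

def seminaive_fixpoint(edges):
    # Incremental version: each round joins only against the *change* (delta)
    # since the last round, rather than recomputing from scratch.
    paths, delta = set(), set(edges)
    while delta:
        paths |= delta
        delta = {(x, z) for (x, y) in edges for (y2, z) in delta if y == y2} - paths
    return paths

edges = {(1, 2), (2, 3), (3, 4)}
assert naive_fixpoint(edges) == seminaive_fixpoint(edges)
```

Both loops compute the same closure; the semi-naive one just avoids re-deriving facts it already has, which is the payoff of incrementally recomputing the function as its input changes.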
Just hit the rewind button a few times. Okay, thank you so much for taking the time. This was a lot of fun.
What is the difference between entanglement and classical correlations? - Temple of Wisdom

In classical physics, the concept of correlation refers to the statistical relationship between two or more variables. Correlation implies that two variables are related or linked in some way, but it does not necessarily imply any causal connection between them. For example, the price of ice cream and the number of swimming pool accidents may be correlated, but it does not mean that the price of ice cream causes swimming pool accidents or vice versa. In quantum mechanics, the concept of correlation is fundamentally different from classical physics due to the phenomenon of entanglement. Entanglement is a quantum mechanical phenomenon that occurs when two or more quantum systems become correlated in a way that is not possible in classical physics. When two or more quantum systems are entangled, the state of one system is correlated with the state of the other system in such a way that their joint state cannot be described by a simple combination of the states of the individual systems. One way to understand the difference between classical correlations and entanglement is to consider a simple example of two quantum particles, such as electrons or photons. In classical physics, the state of a system is described by a set of variables, such as position and velocity, that can be measured independently of each other. In quantum mechanics, however, the state of a system is described by a wave function that encodes all possible outcomes of a measurement. When two particles are entangled, their wave functions are linked in such a way that the outcome of a measurement on one particle is correlated with the outcome of a measurement on the other particle, even if the particles are separated by large distances. To illustrate this point, consider the following thought experiment known as the Einstein-Podolsky-Rosen (EPR) paradox.
In this experiment, two particles, such as photons, are produced in a way that ensures they are entangled. The entangled photons are then sent to two distant locations, labeled A and B. At each location, a measurement is performed on one of the photons, and the outcome of the measurement is recorded. According to quantum mechanics, the outcome of a measurement on an entangled particle is not determined until the measurement is performed. However, once a measurement is performed on one particle, the state of the other particle becomes correlated with the outcome of the first measurement. This means that the outcome of the second measurement is not random, but is determined by the outcome of the first measurement. In other words, the two particles are correlated in a way that cannot be explained by classical physics. One of the key differences between classical correlations and entanglement is that classical correlations can be explained by local hidden variables, whereas entanglement cannot. Local hidden variables are hypothetical variables that describe the properties of a system in a way that is consistent with both classical physics and the statistical predictions of quantum mechanics. However, the existence of entanglement implies that local hidden variables cannot fully describe the state of a quantum system. This was proven by John Bell in 1964, who derived a mathematical inequality that can be violated by entangled particles but not by classical systems with local hidden variables. Another important difference between classical correlations and entanglement is that classical correlations can be shared between many particles, whereas entanglement is a property that is specific to two or more particles. This is because entanglement arises from the non-separability of the joint state of the entangled particles, whereas classical correlations can be described by a joint probability distribution that is separable. 
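Bell's argument can be made concrete with a short numerical check (an illustrative sketch added here, not part of the original article). For the maximally entangled singlet state, quantum mechanics predicts a correlation E(a, b) = -cos(a - b) between spin measurements at angles a and b, and the standard CHSH combination of four such correlations reaches 2√2, whereas any local-hidden-variable model is bounded by 2:

```python
import numpy as np

# Correlation between measurement outcomes at angles a and b on the singlet state.
def E(a, b):
    return -np.cos(a - b)

# Angle settings that maximize the quantum CHSH value.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

# CHSH combination; local hidden variables require |S| <= 2.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ≈ 2.828, violating the classical bound of 2
```

The violation of the bound of 2 is exactly what rules out any explanation of these correlations in terms of local hidden variables.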
In conclusion, the difference between entanglement and classical correlations lies in the nature of the correlation itself. Classical correlations arise from statistical relationships between variables, whereas entanglement arises from the non-separability of the joint state of two or more quantum systems. This fundamental difference has important implications for our understanding of the nature of reality, and has important practical applications in areas such as quantum information processing and quantum communication. In particular, entanglement has been proposed as a resource for quantum information processing tasks such as quantum teleportation, quantum cryptography, and quantum computing. For example, in quantum teleportation, the quantum state of one particle can be transmitted to another particle that is entangled with it, without the need for a physical transfer of the particle itself. Similarly, in quantum cryptography, entangled particles can be used to generate shared secret keys that are provably secure against eavesdropping. In addition, entanglement has been studied as a potential means of improving the performance of sensors and measurement devices. For example, entangled particles have been used to improve the sensitivity of interferometers, which are used for measuring small displacements, by reducing the noise due to quantum fluctuations. In summary, the difference between entanglement and classical correlations is a fundamental one that arises from the non-separability of the joint state of entangled quantum systems. While classical correlations can be explained by statistical relationships between variables, entanglement cannot be explained by local hidden variables and has no classical analog. This difference has important implications for our understanding of the nature of reality and has led to the development of new technologies such as quantum information processing and quantum sensors.
Multi-Period Binomial Model - KZHU.ai

Multi-period binomial model

The multi-period binomial model is really just a series of one-period models spliced together. When pricing a European option, you can calculate it backward one period at a time, but you may also do the same thing as one calculation. It appears the true probabilities p (price going up) and 1-p (price going down) do not matter; only the risk-neutral probabilities q and 1-q matter. Actually, you should NOT expect to see two otherwise-identical securities with extremely different true probabilities: option pricing theory simply states that if you had two such securities in the economy, their options would have the same price. Another interesting observation: as the gross risk-free interest rate R increases, the option price also increases. This is totally against what you would see in a deterministic world. In a deterministic world, when the interest rate increases, the present value of a cash flow decreases. Recall our analysis of the binomial model: "no arbitrage" is equivalent to d < R < u. Any derivative security with time-T payoff C[T] can be priced as C[0] = (1 / R^n) * E[0]^Q[C[T]], where q > 0, 1-q > 0 and n is the number of periods. This representation is actually more general. This is the first fundamental theorem of asset pricing:
• For any model, if there exists a risk-neutral distribution such that the equation holds, then arbitrage can NOT exist.
• The reverse is also true: if there is no arbitrage, then a risk-neutral distribution exists.

Pricing American Options

We can price American options in the same way as European options, but now must also check whether it is optimal to exercise early at each node. NOTE: recall that it is never optimal to exercise an American call option early on a non-dividend-paying stock, so consider put options.
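The backward-induction pricing described above, including the early-exercise check for American options, can be sketched as follows (an illustrative Python sketch, not from the course; the function name and parameter values are made up):

```python
import numpy as np

# Backward induction in an n-period binomial lattice. u, d are the up/down
# factors, R is the gross per-period risk-free rate, and the risk-neutral
# probability is q = (R - d)/(u - d).
def binomial_price(S0, K, n, u, d, R, payoff, american=False):
    q = (R - d) / (u - d)
    # Terminal stock prices S0 * u^j * d^(n-j), j = 0..n up-moves.
    S = S0 * u ** np.arange(n + 1) * d ** np.arange(n, -1, -1)
    V = payoff(S, K)
    for t in range(n - 1, -1, -1):
        S = S0 * u ** np.arange(t + 1) * d ** np.arange(t, -1, -1)
        V = (q * V[1:] + (1 - q) * V[:-1]) / R   # discounted risk-neutral expectation
        if american:
            V = np.maximum(V, payoff(S, K))      # check early exercise at each node
    return V[0]

put = lambda S, K: np.maximum(K - S, 0.0)
print(binomial_price(100, 100, 3, 1.07, 1 / 1.07, 1.01, put))                  # European put
print(binomial_price(100, 100, 3, 1.07, 1 / 1.07, 1.01, put, american=True))   # American put
```

Note that the true probability p never appears: only q enters the recursion, and the American value is always at least the European one because of the extra exercise option.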
Self-Financing Trading Strategies

A self-financing trading strategy is a strategy θ[t] = (x[t], y[t]) where changes in V[t] are due entirely to trading gains or losses, rather than the addition or withdrawal of cash funds. The definition states that the value of a self-financing portfolio just before trading is equal to the value of the portfolio just after trading, so no funds have been deposited or withdrawn.

Risk-Neutral Price = Price of Replicating Strategy

Dynamic replication: you use a trading strategy which adjusts the holdings in the stock and cash account at each time, so that at maturity we replicate the payoff of the option. In the multi-period model we can do the same as in the single-period model: we can construct a self-financing trading strategy that replicates the payoff of the option. The initial cost of this replicating strategy must equal the value of the option, otherwise there is an arbitrage opportunity. The dynamic replication price is of course equal to the price obtained from the risk-neutral probabilities by working backwards in the lattice. At any node, the value of the option is equal to the value of the replicating portfolio at that node.

Including Dividends

Consider again the 1-period model and assume the stock pays a proportional dividend of cS[0] at t = 1. Now the no-arbitrage condition is d + c < R < u + c. We can again use a 1-period replicating strategy to replicate the payoff of the option. In the multi-period binomial model, we can assume a proportional dividend in each period. Each embedded 1-period model has identical risk-neutral probabilities. We can view the i-th dividend as a separate security; then the owner of the underlying security owns a 'portfolio' of securities at time 0.

Pricing Forwards and Futures in the binomial model

When you buy a forward contract, no money changes hands; in fact the initial value of a forward contract is zero.
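The one-period replication argument can be checked numerically. The sketch below (illustrative, with made-up numbers) solves for the stock holding x and cash position y from x·S·u + y·R = Cu and x·S·d + y·R = Cd, and confirms that the cost of the replicating portfolio equals the risk-neutral price:

```python
# Solve the one-period replication equations:
#   x*S*u + y*R = Cu   (up state)
#   x*S*d + y*R = Cd   (down state)
def replicate(S, u, d, R, Cu, Cd):
    x = (Cu - Cd) / (S * (u - d))            # shares of stock to hold
    y = (u * Cd - d * Cu) / (R * (u - d))    # cash-account position
    return x, y

S, u, d, R, K = 100.0, 1.05, 0.95, 1.02, 100.0
Cu, Cd = max(S * u - K, 0.0), max(S * d - K, 0.0)   # one-period call payoffs

x, y = replicate(S, u, d, R, Cu, Cd)
cost = x * S + y                             # price of the replicating portfolio

q = (R - d) / (u - d)                        # risk-neutral probability
rn_price = (q * Cu + (1 - q) * Cd) / R       # risk-neutral price
print(cost, rn_price)                        # the two prices coincide
```

If the option traded at any other price, buying the cheaper side and selling the dearer one would be an arbitrage, which is exactly the argument in the notes.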
The so-called forward price G[0] = E[0]^Q[S[n]] is used to determine the payoff at the maturity of the forward contract. If the underlying security pays a dividend, it goes into the risk-neutral probabilities Q; the presence of a dividend makes the forward contract a little more complicated. The fair value of a futures contract at any time is actually zero. The price of a futures contract F[t] is really only used to determine the payoff of owning it. F[n] = S[n] by definition. F[t] is not how much you need to pay when buying, or how much you receive when selling: a futures contract always costs nothing. F[t] is only used to determine the cash flow associated with holding the contract. ±(F[t] - F[t-1]) is the payoff received at time t from a long (+) or short (-) position of one contract held between t-1 and t. So a futures contract is always worth zero, but pays a "dividend" of F[t] - F[t-1] at time t. The price of futures is F[0] = E[0]^Q[S[n]]. The prices of forward contracts and futures contracts are equal, even though they are different contracts. The futures contract marks to market every day, so there is a 'dividend' payoff every day, whereas the forward pays nothing until maturity. But they actually have the same price, F[0] = G[0]. This is ONLY true in the binomial model, because interest rates are deterministic and R^n can be factored out of the expectation; in general, interest rates are random.

Black-Scholes Formula

The Black-Scholes formula is of great importance in industry, and we can view the binomial model as an approximation to the Black-Scholes formula. Notice that μ (the drift of the Geometric Brownian motion) does not appear in the Black-Scholes formula, just as p (the real probability) does not appear in the option pricing formulas of the binomial model. Once we know the price of a European call option, we can calculate the price of the European put option by using put-call parity (with dividend yield c). Industry practitioners use the Black-Scholes formula to quote the prices of options. The binomial model is often used as an approximation to it.
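The claim F[0] = G[0] = E[0]^Q[S[n]] can be verified directly in a small lattice. The sketch below (illustrative, with made-up parameters) computes the risk-neutral expectation of S[n] for a non-dividend-paying stock and checks that it equals S[0]·R^n, as the deterministic-rate argument predicts:

```python
from math import comb, isclose

# In an n-period lattice, S[n] = S0 * u^j * d^(n-j) with risk-neutral
# probability C(n, j) * q^j * (1-q)^(n-j), where q = (R - d)/(u - d).
S0, u, d, R, n = 100.0, 1.05, 0.95, 1.02, 5
q = (R - d) / (u - d)

G0 = sum(comb(n, j) * q**j * (1 - q)**(n - j) * S0 * u**j * d**(n - j)
         for j in range(n + 1))

print(G0, S0 * R**n)   # forward/futures price equals S0 * R^n
```

The identity holds because each one-period risk-neutral expectation of the price is S·R; with random interest rates this factoring breaks down, which is why forwards and futures prices differ in general.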
But one needs to calibrate the binomial model by translating Black-Scholes parameters into binomial model parameters.

Pricing a European put on a futures contract

Many of the most liquid options are options on futures contracts. Trading all the stocks in an index is expensive and time-consuming, so we don't trade the indices themselves; we trade futures on these indices. In practice we don't need a model to price liquid options: demand and supply determine the prices of options. This amounts to determining the implied volatility σ. The model helps us in two ways: first, it prices exotic and more illiquid securities whose prices are not available in the markets; second, it helps hedge options.

My Certificate

For more on the multi-period binomial model, please refer to the wonderful course here https://www.coursera.org/learn/financial-engineering-1

I am Kesler Zhu, thank you for visiting. Check out all of my course reviews at https://KZHU.ai
re: Re: Re: st: Direction of the effect of the cluster command on the

Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.

From: Christopher Baum <[email protected]>
To: <[email protected]>
Subject: re: Re: Re: st: Direction of the effect of the cluster command on the
Date: Thu, 6 Jan 2011 09:52:39 -0500

Austin wrote:

Useful to think of super-obs but not quite right. If you have 50 clusters and 100 regressors (with a few thousand obs) but you are only interested in testing one coefficient, you will typically be fine, i.e. you will have negligible bias in the SE, thus getting correct inference on average with the CRSE, and it may often be the case that no alternative approach gets you correct inference (except resampling clusters for a cluster-robust bootstrap). So estimating a regression with 50 obs and 100 coefficients is not quite the right analogy--more useful to think of the "effective" sample size as between M (number of clusters) and N (number of obs), computable using "roh" per Kish, L. (1965), Survey Sampling, New York: Wiley (note that the CRSE is also the standard svy estimator).

Quite so, Austin; unless you are interested in all the coefficients in a regression, you may not be that concerned about the number of 'super-observations'. The effective sample size is indeed a more useful construct. However it should be noted, for those not that familiar with cluster-robust VCEs, that Stata uses the number of 'super-observations' minus 1 when it reports test statistics. For instance,

webuse grunfeld
reg invest mvalue kstock time, clu(company)

reports an ANOVA F-stat based on 3 and 9 df, where 9 is 10 companies - 1. Likewise, the t-stat pvals are those for 9 df. This is rather important for the original poster of this thread, who was working with 4 clusters (and 3 denom. d.f.
in the F, and 3 d.f. in the t). Kit Baum | Boston College Economics & DIW Berlin | http://ideas.repec.org/e/pba1.html An Introduction to Stata Programming | http://www.stata-press.com/books/isp.html An Introduction to Modern Econometrics Using Stata | http://www.stata-press.com/books/imeus.html * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
School of Mathematical and Statistical Sciences Faculty Publications and Presentations

Submissions from 2019

Fast equilibration dynamics of viscous particle-laden flow in an inclined channel, Jeffrey Wong, Michael R. Lindstrom, and Andrea L. Bertozzi
Solutions of evolutionary equation based on the anisotropic variable exponent Sobolev space, Huashui Zhan and Zhaosheng Feng

Submissions from 2018

On the origin of crystallinity: a lower bound for the regularity radius of Delone sets, Igor A. Baburin, Mikhail M. Bouniaev, Nikolay Dolbilin, Nikolay Yu. Erokhovets, Alexey Garber, Sergey V. Krivovichev, and Egon Schulte
Linear Stability Analysis with Solution Patterns due to Varying Thermal Diffusivity for a Convective Flow in a Porous Medium, Dambaru Bhatta
Solution of mathematical model for gas solubility using fractional-order Bhatti polynomials, Muhammad I. Bhatti, Paul Bracken, Nicholas Dimakis, and Armando Herrera
The Local Theory for Regular Systems in the Context of t-Bonded Sets, Mikhail M. Bouniaev and Nikolay Dolbilin
A Formulation of L-Isothermic Surfaces in Three-Dimensional Minkowski Space, Paul Bracken
Cartan frames and algebras with links to integrable systems differential equations and surfaces, Paul Bracken
Using restrictions to accept or reject solutions of radical equations, Eleftherios Gkioulekas
Covering a Ball by Smaller Balls, Alexey Glazyrin
Series for 1/π of level 20, Timothy Huber, Daniel Schultz, and Dongxi Ye
The Development of Secondary Mathematics Teachers' Pedagogical Identities in the Social Context of Classroom Interactions, Hyung Won Kim
Analogs of Steiner's porism and Soddy's hexlet in higher dimensions via spherical codes, Oleg R. Musin
Optimal quantizers for some absolutely continuous probability measures, Mrinal Kanti Roychowdhury
Non-Stationary Platform Inverse Synthetic Aperture Radar Maneuvering Target Imaging Based on Phase Retrieval, Hongyin Shi, Saixue Xia, Qi Qin, Ting Yang, and Zhijun Qiao
ISAR Autofocus Imaging Algorithm for Maneuvering Targets Based on Phase Retrieval and Gabor Wavelet Transform, Hongyin Shi, Ting Yang, and Zhijun Qiao
PRECONDITIONING METHODS FOR THIN SCATTERING STRUCTURES BASED ON ASYMPTOTIC RESULTS, Josef A. Sifuentes and Shari Moskow
Traveling wave solutions of a nonlocal dispersal predator–prey model with spatiotemporal delay, Zhihong Zhao, Rui Li, Xiangkui Zhao, and Zhaosheng Feng

Submissions from 2017

Using Technology to Determine Factorability or Non-factorability of Quadratic Algebraic Trinomials, John E. T. Bernard, Olga Ramirez, and Cristina Villalobos
A class of transformations of a quadratic integral generating dynamical systems, Paul Bracken
A geometric formulation of Lax integrability for nonlinear equations in two independent variables, Paul Bracken
An Intrinsic Characterization of Bonnet Surfaces Based on a Closed Differential Ideal, Paul Bracken
An Introduction to Ricci Flow for Two-Dimensional Manifolds, Paul Bracken
Applications of the Lichnerowicz Laplacian to stress energy tensors, Paul Bracken
Spectral Theory of Operators on Manifolds, Paul Bracken
Yang Mills Theories, Paul Bracken
Conversion of saline water and dissolved carbon dioxide into value-added chemicals by electrodialysis, Saad Dara, Michael R. Lindstrom, Joseph English, Arman Bonakdarpour, Brian Wetton, and David P.
Quantization for Uniform Distributions on Equilateral Triangles, Carl P. Dettmann and Mrinal Kanti Roychowdhury
Frequency of Nonalcoholic Fatty Liver Disease and Subclinical Atherosclerosis Among Young Mexican Americans, Clarence Gill, Kristina Vatcheva, Jen-Jung Pan, Beverly Smulevitz, David D. McPherson, Michael Fallon, Joseph B. McCormick, Susan P. Fisher-Hoch, and Susan T. Laing
On the denesting of nested square roots, Eleftherios Gkioulekas
Weierstrass Interpolation of Hecke Eisenstein Series, Timothy Huber and Matthew Levine
An answer to a question of A. Lubin: The lifting problem for commuting subnormals, Sang Hoon Lee, Woo Young Lee, and Jasang Yoon
Assessment of the effects of azimuthal mode number perturbations upon the implosion processes of fluids in cylinders, Michael R. Lindstrom
Electric ion dispersion as a new type of mass spectrometer, Michael R. Lindstrom, Iain Moyles, and Kevin Ryczko
A comparison of Fick and Maxwell–Stefan diffusion formulations in PEMFC gas diffusion layers, Michael R. Lindstrom and Brian Wetton
Multi-Type Branching Processes Modeling of Nosocomial Epidemics, Zeinab Mohamed and Tamer Oraby
Cybersecurity: Time Series Predictive Modeling of Vulnerabilities of Desktop Operating System Using Linear and Non-Linear Approach, Nawa Raj Pokhrel, Hansapani Rodrigo, and Chris P. Tsokos
On the finite W-algebra for the Lie superalgebra Q(N) in the non-regular case, Elena Poletaeva and Vera Serganova
A Refined Approach for Non-Negative Entire Solutions of Δu + u^p = 0 with Subcritical Sobolev Growth, John Villavert
Solutions of evolutionary p(x)-Laplacian equation based on the weighted variable exponent space, Huashui Zhan and Zhaosheng Feng

Submissions from 2016

Geometrical Problems Related to Crystals, Fullerenes, and Nanoparticle Structure, Mikhail M. Bouniaev, Nikolay Dolbilin, Oleg R. Musin, and Alexey S. Tarasov
An Application of the Spectral Theorem To The Laplacian on a Riemannian Manifold, Paul Bracken
Harmonic Maps Surfaces and Relativistic Strings, Paul Bracken
Quantum Dynamics, Entropy and Quantum Versions of Maxwell's Demon, Paul Bracken
A polyhedral model of partitions with bounded differences and a bijective proof of a theorem of Andrews, Beck, and Robbins, Felix Breuer and Brandt Kronholm
From Exam to Education: The Math Exam/Education Resources, Carmen Bruni, Christina Koch, Bernhard Konrad, Michael R. Lindstrom, Iain Moyles, and Will Thompson
The complete classification of five-dimensional Dirichlet–Voronoi polyhedra of translational lattices, Mathieu Dutour Sikiric, Alexey Garber, Achill Schürmann, and Clara Waldmann
The Voronoi functional is maximized by the Delaunay triangulation in the plane, Herbert Edelsbrunner, Alexey Glazyrin, Oleg R. Musin, and Anton Nikitenko
Liver and other Gastrointestinal Cancers are frequent in Mexican Americans, Ariana L. Garza, Kristina Vatcheva, Jen-Jung Pan, Mohammad H. Rahbar, Michael Fallon, Joseph B. McCormick, and Susan P.
Multilocality and fusion rules on the generalized structure functions in two-dimensional and three-dimensional Navier-Stokes turbulence, Eleftherios Gkioulekas
Generalized reciprocal identities, Timothy Huber and Daniel Schultz
STABILITY ANALYSIS AND HOPF BIFURCATION OF DENSITY-DEPENDENT PREDATOR-PREY SYSTEMS WITH BEDDINGTON-DEANGELIS FUNCTIONAL RESPONSE, Xin Jiang, Zhikun She, and Zhaosheng Feng
Cirrhosis and Advanced Fibrosis in Hispanics in Texas: The Dominant Contribution of Central Obesity, Jingjing Jiao, Gordon P. Watt, MinJae Lee, Mohammad H. Rahbar, Kristina Vatcheva, Jen-Jung Pan, Joseph B. McCormick, Susan P. Fisher-Hoch, Michael Fallon, and Laura Beretta
Student Understanding of Symbols in Introductory Statistics Courses, Hyung Won Kim, Tim Fukawa-Connelly, and Samuel A. Cook
Existence of positive solutions to semilinear elliptic systems with supercritical growth, Congming Li and John Villavert
Effect of Surface Roughness on the Magnetic Field Profile in the Meissner State of a Superconductor, Michael R. Lindstrom, Alex C. Y. Fang, and Robert F. Kiefl
Generalizations of Tucker–Fan–Shashkin Lemmas, Oleg R. Musin
KKM type theorems with boundary conditions, Oleg R. Musin
Depression in Mexican Americans with Diagnosed and Undiagnosed Diabetes, Rene L. Olvera, Susan P. Fisher-Hoch, Douglas E. Williamson, Kristina Vatcheva, and Joseph B. McCormick
Expert elicitation on the uncertainties associated with chronic wasting disease, Michael G. Tyshenko, Tamer Oraby, Shalu Darshan, Margit Westphal, Maxine C. Croteau, Willy Aspinall, Susie Elsaadany, Daniel Krewski, and Neil Cashman
Multicollinearity in Regression Analyses Conducted in Epidemiologic Studies, Kristina Vatcheva, MinJae Lee, Joseph B. Mccormick, and Mohammad H. Rahbar
The Effect of Ignoring Statistical Interactions in Regression Analyses Conducted in Epidemiologic Studies: An Example with Survival Analysis Using Cox Proportional Hazards Regression Model, Kristina Vatcheva, Joseph B. Mccormick, and Mohammad H. Rahbar
Variable Moving Average Transform Stitching Waves, Vesselin Vatchev
Asymptotic and optimal Liouville properties for Wolff type integral systems, John Villavert
Riemann-Hilbert approach for the FQXL model: A generalized Camassa-Holm equation with cubic and quadratic nonlinearity, Zhen Wang and Zhijun Qiao
Hepatitis C Virus in Mexican Americans: a population-based study reveals relatively high prevalence and negative association with diabetes, Gordon P. Watt, Kristina Vatcheva, Laura Beretta, Jen-Jung Pan, Michael Fallon, Joseph B. McCormick, and Susan P. Fisher-Hoch
The Precarious Health of Young Mexican American Men in South Texas, Cameron County Hispanic Cohort, 2004–2015, Gordon P. Watt, Kristina Vatcheva, Derek M. Griffith, Belinda M. Reininger, Laura Beretta, Michael Fallon, Joseph B. McCormick, and Susan P. Fisher-Hoch
EXISTENCE AND BEHAVIOR OF POSITIVE SOLUTIONS TO ELLIPTIC SYSTEM WITH HARDY POTENTIAL, Lei Wei, Xiyou Cheng, and Zhaosheng Feng
Metabolic Health Has Greater Impact on Diabetes than Simple Overweight/Obesity in Mexican Americans, Shenghui Wu, Susan P. Fisher-Hoch, Belinda M. Reininger, Kristina Vatcheva, and Joseph B.

Submissions from 2015

An Application of Prolongation Algebras to Determine Bäcklund Transformations for Nonlinear Equations, Paul Bracken
Connections of zero curvature and Bäcklund transformations, Paul Bracken
Schrödinger Equation for a Particle on a Curved Space and Superintegrability, Paul Bracken
Some Results for the Hodge Decomposition Theorem in Euclidean Three-Space, Paul Bracken
Undiagnosed Diabetes and Pre-Diabetes in Health Disparities, Susan P. Fisher-Hoch, Kristina Vatcheva, Mohammad H. Rahbar, and Joseph B. McCormick
Symmetries of monocoronal tilings, Dirk Frettlöh and Alexey Garber
Subclinical Atherosclerosis and Obesity Phenotypes Among Mexican Americans, Susan T. Laing, Beverly Smulevitz, Kristina Vatcheva, Mohammad H. Rahbar, Belinda M. Reininger, David D. McPherson, Joseph B. McCormick, and Susan P. Fisher-Hoch
Asymptotic Analysis of a Magnetized Target Fusion Reactor, Michael R. Lindstrom
Investigation into Fusion Feasibility of a Magnetized Target Fusion Reactor: A Preliminary Numerical Framework, Michael R. Lindstrom, Sandra Barsky, and Brian Wetton
Barriers to disaster preparedness among medical special needs populations, Leslie Meyer, Kristina Vatcheva, Stephanie Castellanos, and Belinda M. Reininger
Quantization coefficients in infinite systems, Eugen Mihailescu and Mrinal Kanti Roychowdhury
Use of Cubic B-Spline in Approximating Solutions of Boundary Value Problems, Maria Mungia and Dambaru Bhatta
Optimal Packings of Congruent Circles on a Square Flat Torus, Oleg R. Musin and Anton Nikitenko
Depression, Obesity, and Metabolic Syndrome: Prevalence and Risks of Comorbidity in a Population-Based Study of Mexican Americans, Rene L. Olvera, Douglas E. Williamson, Susan P. Fisher-Hoch, Kristina Vatcheva, and Joseph B. McCormick
Bounded rationality alters the dynamics of paediatric immunization acceptance, Tamer Oraby and Chris T. Bauch
Non-communicable diseases and preventive health behaviors: a comparison of Hispanics nationally and those living along the US-Mexico border, Belinda M. Reininger, Jing Wang, Susan P. Fisher-Hoch, Alycia Boutte, Kristina Vatcheva, and Joseph B. Mccormick
Association of Total and Differential White Blood Cell Counts to Development of Type 2 Diabetes in Mexican Americans in Cameron County Hispanic Cohort, Kristina Vatcheva, Susan P. Fisher-Hoch, Mohammad H. Rahbar, MinJae Lee, Rene L. Olvera, and Joseph B. McCormick
Critical Controlled Branching Processes And Their Relatives, George Yanev

Submissions from 2014

On cubic multisections of Eisenstein series, Andrew Alaniz and Timothy Huber
Solution of Fractional Harmonic Oscillator in a Fractional B-poly Basis, Muhammad I. Bhatti
Lessons Learned in Establishing STEM Student Cohorts at a Border University and the Effect on Student Retention and Success, Mikhail M. Bouniaev, Immanuel Edinbarough, and Bill W. Elliott
Connections of zero curvature and applications to nonlinear partial differential equations, Paul Bracken
Limit distributions of random walks on stochastic matrices, Santanu Chakraborty and Arunava Mukherjea
Ill-posedness of the two-dimensional Keller-Segel model in Triebel-Lizorkin spaces, Chao Deng and John Villavert
Energy and potential enstrophy flux constraints in quasi-geostrophic models, Eleftherios Gkioulekas
A theory of theta functions to the quintic base, Timothy Huber
Differential equations for septic theta functions, Timothy Huber and Danny Lara
Reconstruction of Structured Quadratic Pencils from Eigenvalues on Ellipses and Parabolas, R.
Ibragimov and Vesselin Vatchev Assessing the optimal virulence of malaria‐targeting mosquito pathogens: a mathematical study of engineered Metarhizium anisopliae, Bernhard Konrad, Michael R. Lindstrom, Anja Gumpinger, Jielin Zhu, and Daniel Coombs Mathematical modelling of the effect of surface roughness on magnetic field profiles in type II superconductors, Michael R. Lindstrom, Brian Wetton, and Rob Kiefl Quantization dimension for Gibbs-like measures on cookie-cutter sets, Mrinal Kanti Roychowdhury
The figure (Intro 1 figure) shows the trajectory (i.e., the path) of a ball undergoing projectile motion.

Two other points along the trajectory are indicated in the figure.

* One is the moment the ball reaches the peak of its trajectory, at time t_1 with velocity v_1_vec. Its position at this moment is denoted by (x_1, y_1) or (x_1, y_max) since it is at its maximum height.
* The other point, at time t_2 with velocity v_2_vec, corresponds to the moment just before the ball strikes the ground on the way back down. At this time its position is (x_2, y_2), also known as (x_max, y_2) since it is at its maximum horizontal range.

Projectile motion is symmetric about the peak, provided the object lands at the same vertical height from which it was launched, as is the case here. Hence y_2 = y_0 = 0 m.

What are the values of the initial velocity vector components v_0,x and v_0,y (both in m/s), as well as the acceleration vector components a_0,x and a_0,y (both in m/s^2)? Here the subscript 0 means "at time t_0."

The peak of the trajectory occurs at time t_1. This is the point where the ball reaches its maximum height y_max. At the peak the ball switches from moving up to moving down, even as it continues to travel horizontally at a constant rate. What are the values of the velocity vector components v_1,x and v_1,y (both in m/s), as well as the acceleration vector components a_1,x and a_1,y (both in m/s^2)? Here the subscript 1 means that these are all at time t_1.

If a second ball were dropped from rest from height y_max, how long would it take to reach the ground? Ignore air resistance. Check all that apply.

- t_2 - t_1

Which of the following changes would increase the range of the ball shown in the original figure? Check all that apply.

- Increase v_0 above 30 m/s.
- Reduce v_0 below 30 m/s.
- Reduce theta from 60 degrees to 45 degrees.
- Reduce theta from 60 degrees to less than 30 degrees.
- Increase theta from 60 degrees up toward 90 degrees.
Finding Optimal Paths on Procedural Strategy Maps in Unreal 4

Previously, on Battlestar Galactica

This initial post lays the groundwork introducing Detour, Unreal, the problem space and an initial solution, start there if you're new to making your own navmeshes in Unreal. It's followed by a post showing how to make navmeshes from grids. We then moved on to converting a triangulation to a navmesh to reduce the number of polys needed. Sample project available here as always.

This was going to be the post where I tied everything together and solved all the problems within UE4's Detour wrapper, but unfortunately Detour fundamentally has no solution to the A star problems I've mentioned, and when given meshes that approximate an actual in game map instead of a toy, both the Recast generated navmesh and my own navmesh give poor enough results that I don't consider Detour to be suitable for strategy games where the optimal solution is a requirement. On the bright side, having done all the work to understand Detour and the problem space, it was fairly easy to get a different solution in place, even if in the long run it's likely going to need more maintenance.

Detour giving a good but not optimal path

Wrapping up the Detour experiments

At the end of the previous post I mentioned a couple of things that I hoped could improve Detour's ability to find optimal paths, so before moving on let's tackle those.

One big tile to avoid all the problems

The first idea was that by removing the tile seams, the number of polys is reduced and the number of strangely shaped polys is also reduced since everything is coming straight from the delaunay triangulation. The problem with this is that the verts are converted into a fixed interval coordinate system as mentioned way back in the first post of this series. unsigned short limits total size to 32768 * CellSize so increasing max tile sizes decreases accuracy since we'd have to increase CellSize to compensate.
To get this up to 2km, we'd need the cell size increased to about 6, so our accuracy will drop by a factor of 6. This might be OK for some use cases since 1.0 is accurate to the centimetre, which is likely overkill in a lot of cases. Fortunately we don't need any code to do this, just reduce the TileCount to 1 and increase the CellSize.

Adding heights

The second thought is that having height information might help Detour in assessing whether one triangle is closer than another to the goal. There are a few different cases that need to be solved here:

• Points that lie on triangulation edges and intersect with the tile boundary can use the two points in the triangulation to determine the height of the new point
• Corner points that are inside a triangle need to use a barycentric calculation to determine their position.
• Everything else is a point already in the triangulation, and therefore already has a height

What surprised me after adding heights to the mesh was that the path provided was actually 2D and didn't include the height changes along the path at all. The string pulling process within Detour that is responsible for taking the path corridor (a list of edges that are crossed) and making an optimal line along that corridor doesn't include the intermediate points, so the height information is effectively lost.

It's possible to get this information back by storing the corridor along with the path, but it requires writing your own FindPathToLocationSynchronously or equivalent. Our goal here is to set the flag bWantsPathCorridor so that the UNavigationPath returned by FindPath has all the edges that are crossed, and we can combine that with the path segments to project back onto the triangulation and get the heights. Fortunately there's an enum that will handle setting that flag for us.
There's also one to skip string pulling (ERecastPathFlags::SkipStringPulling), which would be useful if we wanted to just get the corridor from Detour and then find the surface path and do string pulling all in one go, which would be a bit more efficient.

FPathFindingResult AManualDetourNavMesh::FindCorridorPathToLocation(FVector Start, FVector Goal, FPathFindingQuery& Query) const
{
	Query.NavDataFlags = ERecastPathFlags::GenerateCorridor;
	return FindPath(Query.NavAgentProperties, Query);
}

Once we have the corridor it's a matter of walking along the corridor and seeing where the path segments intersect the corridor segments:

bool AManualDetourNavMesh::PathCorridorToSurfacePath(UPARAM(ref) const TArray<struct FNavPathPoint>& Path, UPARAM(ref) const TArray<FNavigationPortalEdge>& Corridor, UPARAM(ref) TArray<FVector>& SurfacePath)
{
	int32 CurrentPathIdx = 0;
	int32 NextPathIdx = 1;
	int32 CurrentEdgeIdx = 0;

	if (Path.Num() == 0 || Corridor.Num() == 0)
	{
		return false;
	}

	SurfacePath.Reset(Path.Num() + Corridor.Num());
	// Start with the first path point
	SurfacePath.Add(Path[0].Location);

	while (Path.IsValidIndex(NextPathIdx))
	{
		// If there are still corridor edges to check, try that
		if (Corridor.IsValidIndex(CurrentEdgeIdx))
		{
			// If current to next intersects current edge, add intersection point
			FVector IntersectionPoint;
			if (FMath::SegmentIntersection2D(Corridor[CurrentEdgeIdx].Left, Corridor[CurrentEdgeIdx].Right, Path[CurrentPathIdx].Location, Path[NextPathIdx].Location, IntersectionPoint))
			{
				SurfacePath.Add(IntersectionPoint);
				CurrentEdgeIdx++;
			}
			// otherwise add the next point and move forward
			else
			{
				SurfacePath.Add(Path[NextPathIdx].Location);
				CurrentPathIdx++;
				NextPathIdx++;
			}
		}
		// If we've checked everything in the corridor, just add next point and move forward
		else
		{
			SurfacePath.Add(Path[NextPathIdx].Location);
			CurrentPathIdx++;
			NextPathIdx++;
		}
	}

	return (SurfacePath.Num() == Path.Num() + Corridor.Num());
}

Which gives us a nice surface path like this:

2D Path projected onto the 3D Triangulation using the corridor

A better test harness

To go onto a slight tangent, I wanted to improve the test harness to look a bit more like a real map, so instead of being an open field it has a set of pseudorandom obstacles.
The process for this is:

• Divide the map into chunks
• put an obstacle in each chunk
• put a height change in a largeish radius around each obstacle

Perspective view of test map

Final? Results

Unfortunately even with heights added and no tiling, Detour is just not using a reliable heuristic in its A star implementation. Ultimately there is a tradeoff between having an accurate heuristic, which would progressively build a string pulled path, and a low cost heuristic, which does a best effort guess on which neighbouring triangle is closer to the goal. Many months ago I played around with an algorithm that focuses on having an accurate heuristic called Triangulated Polygon A-star (TPAStar), so I ported that across to Unreal to see how it fared.

TPAStar navigation result

Pretty good, but let's see what tradeoffs we have to make to get that result. There's an implementation at https://github.com/grgomrton/tpastar in C# which comes with a handy little test harness. When I initially ported this across to Unreal and tried it on my triangulation, the pathfinder was failing to complete and stuck in an infinite loop. I realised this is because the algorithm cannot handle interior vertices within its triangulation, and sure enough adding triangles to the C# program to create interior vertices had the same effect there, so it wasn't that I'd messed up the port. In addition, the implementation is 2D. I could potentially change this but with the restriction of having no interior vertices, there's not much point having heights anyway.

An interior vertex will break the algorithm

I modified the algorithm slightly to account for the interior vertex problem by only searching neighbours that are set to pathable, but this still leaves interior vertices where there's traversable height changes. Lacking a better idea I created a second navmesh without any height information and without any of the height changes, so that TPAStar could work on its own navmesh without interior vertices.
It might be tempting to think that with this restriction, Detour would also give optimal results, but the last example I showed of Detour has no interior vertices and it still got the path wrong.

Detour navigation without interior vertices

While removing the neighbours that aren't pathable might seem attractive to save space, the neighbours will be needed later.

Surface projection

After getting the path from TPAStar using the 2D triangulation, we can use the regular one with height information to do a similar surface projection procedure to what we did with Detour. We find the start and end triangle using barycentric point in triangle tests, then walk along the path adding points as we intersect segments, and moving to the next triangle as we move across each edge.

While it might seem like this is the entire solution, unfortunately it's very common that there is no edge from one triangle in the path to the next triangle. This may sound impossible, but in fact every time an obstacle is traversed the path will go exactly to one of the points on the triangulation, and from there could go to any triangle connected to that point, not just neighbours of the current triangle. The solution to this is to walk around the point when this situation is detected, and this is where the impassable triangles might be needed.

Delaunator stores triangle connections as half edges, so the way to walk around a point is to take an edge that points into it and do this:

int32 Incoming = StartEdge;
do
{
	int32 Tri = Triangulation.TriangleOfEdge(Incoming);
	// Move to the next triangle around this point
	int32 Outgoing = Triangulation.NextHalfedge(Incoming);
	Incoming = Triangulation.HalfEdges[Outgoing];
} while (Incoming != -1 && Incoming != StartEdge);

The following two screenshots show the difference between the two algorithms, with Manual Detour and Recast Detour both giving good but not quite straight paths, and TPAStar giving a perfect result.
Recast is not going over the sloped section and is instead treating it as an obstacle, but this isn't actually the relevant bit to look at - the path isn't straight in the middle so even if the mesh settings were changed a bit it's clear that it's not going to find optimal paths reliably.

Top View: Detour with Manual Navmesh or Recast Navmesh vs TPAStar

Perspective View: Detour with Manual Navmesh or Recast Navmesh vs TPAStar

In a real test within Maladius, the 2D and 3D navmeshes look like this:

2D Triangulation pathfinding mesh

3D pathfinding heightmesh

With a few thousand triangles in each navmesh, pathfinding results seem good! Path test 1 took 0.001 seconds and Path test 2 took just under 0.003 seconds.

Further Work

The performance can probably be improved quite a bit by pre-allocating space for the double linked lists that TPAStar uses heavily. This would also help with the other obvious improvement, which would be moving navigation requests to a worker thread and having a queue to avoid stalling the game thread during times of high load.

I think for now I'm going to take a break from pathfinding and do something else. I hope this series of posts is helpful to anyone looking to tinker with Unreal's Navigation systems.
Properties of fluids, Factors affecting density and pressure

Matter can be found in nature in one of three states which are solid, liquid, and gas. Solid materials (like wood and glass) have a definite shape and volume, while liquids and gases (like water and air) have no definite shape but they take the shape of their container, so, they are called fluids.

Fluids are materials that can flow and have indefinite shapes. There are two types of fluids:

1. Liquids are characterized by: definite volume, smooth flow, and being incompressible.
2. Gases are characterized by: occupying any space, taking the volume of their container, and can be easily compressed.

Properties of fluids

We will explain in detail some of the physical properties characterizing the fluids, which are density and pressure.

Density is the mass of the unit volume of the substance, or the mass of the body divided by its volume. The density of a material is given by the relation:

ρ = m / V

Where: m is the mass of the substance and is measured in kg, and V is the volume of the substance and is measured in m^3; consequently, the density is measured in kg/m^3. When the density of iron = 7900 kg/m^3, it means that the mass of 1 m^3 of iron = 7900 kg.

When mixing two or more materials then:

m (mix) = m_1 + m_2 + ……
ρ (mix) V (mix) = ρ_1 V_1 + ρ_2 V_2 + ……
V (mix) = V_1 + V_2 + ……

And if the volume of the mixture shrinks on mixing:

V (mix) = [ V_1 + V_2 ] − ΔV

The factors that affect the density

Density differs from one material to another because of the differences in:

1. The atomic weight of the element or the molecular weight of the compound.
2. The distance between atoms (Interatomic distances) or molecules (Intermolecular spaces).
Density is considered a characteristic property of the material, because it is constant for the same material and does not change as the mass or volume of the material changes at the same temperature. It changes by changing the type of material or changing the temperature, because the increase in temperature changes the intermolecular spaces between atoms or molecules and consequently the density.

Applications of density

Indicating how well the battery of the car is charged by measuring the density of the electrolytic solution inside it: When the battery is discharged, the density of its electrolytic solution (diluted sulphuric acid) decreases because of the chemical reaction with the lead plates and the formation of lead sulphate. When the battery is recharged, the sulphate is separated from the lead plates and goes back to the electrolyte, and the density increases again.

Diagnosis of some diseases like Anemia by measuring blood density: The normal rate of blood density ranges from 1040 kg/m^3 to 1060 kg/m^3. If blood density exceeds 1060 kg/m^3, this indicates an increase in the concentration of the red blood cells; if blood density falls below 1040 kg/m^3, this indicates a decrease in the concentration of the red blood cells, which indicates Anemia.

Detecting the increase of salt concentration in urine by measuring the urine density: The normal density of urine is 1020 kg/m^3, and some diseases cause an increase of salts in the urine that increases its density.

Relative density is the ratio between the density of a material to the density of water at the same temperature, or it is the ratio between the mass of a certain volume of a material to the mass of the same volume of water at the same temperature.
The relative density of a substance can be determined from the relations:

Relative density of a substance = Density of material at a certain temperature / Density of water at the same temperature
Relative density of a substance = Mass of a certain volume of a material at a certain temperature / Mass of the same volume of water at the same temperature

The relative density is dimensionless because it is a ratio between two similar quantities. When the relative density of gasoline is 0.9, it means that the ratio between the density of gasoline to that of water at the same temperature = 0.9.

The density of a material can be determined by knowing its relative density using the following relation:

ρ_material = ρ_relative × ρ_water = ρ_relative × 1000 (Where: ρ_water = 1000 kg/m^3)

When a force (F) acts on a surface of area (A), pressure (P) is produced on this area. The pressure at a point is the average force acting perpendicularly on the unit area surrounding this point.

Force perpendicular to the surface, so, P = F/A = mg/A
Force making angle θ with the surface, so, P = F sin θ / A
Force making angle θ with the normal to the surface, then, P = F cos θ / A

Where: Force (F) is measured in Newton (N) and area (A) is measured in m^2. Thus, pressure is measured in N/m^2 (Pascal) and its equivalent units are kg/(m.s^2) or J/m^3. When the pressure at a point = 500 N/m^2, it means that the average force acting perpendicularly on the unit area surrounding this point = 500 N.

Factors affecting the pressure at a point:

1. Average force acting perpendicularly (F), directly proportional: P ∝ F at constant A.
2. Area surrounding this point (A), inversely proportional: P ∝ (1/A) at constant F.

It is clear that as the area increases, the pressure decreases, so, wide tires are used in the heavy trucks; thus, the pressure due to the weight of the truck decreases on the road, so, tires do not sink in the sandy roads.
As the area decreases, the pressure increases, so, needles and pins have sharp tips; thus, higher pressure is produced, so, they penetrate bodies easily.

Applications on the pressure

1. Measuring blood pressure
2. Measuring the air pressure inside the tires of a car

Measuring blood pressure: A normal person has two values for blood pressure (the contracting pressure and the relaxing pressure); the person is said to be a blood pressure patient if one of these values has changed.

The systolic (contracting) pressure is the maximum value for the blood pressure when the heart muscle contracts and equals 120 torr for a normal person. The diastolic (relaxing) pressure is the minimum value for the blood pressure when the heart muscle relaxes and equals 80 torr for a normal person. When the blood pressure for a normal person is 120/80, it means that the maximum value for blood pressure in the artery when the heart muscle contracts is 120 torr, and the minimum value for blood pressure in the artery when the heart muscle relaxes is 80 torr.

Measuring the air pressure inside the tires of a car: The tire of a car is filled with air under a suitably high pressure so that the contact area between the tire and the road is minimum; consequently the friction decreases, which in turn decreases the heating of the tire, and vice versa.

Elephant foot pressure or man's? Pressure due to a pointed high heel is greater than that due to an elephant's foot on the ground, because the pressure is inversely proportional to the surface area.

Pressure at a point inside a liquid

When a liquid is put in a container, every point inside the liquid is affected by the weight of the liquid column above it, which depends on its height (h) and the area of its base (A), and this causes pressure at this point.
Pressure at a point inside a liquid is the weight of the liquid column whose base is the unit area surrounding this point and whose height is the vertical distance from this point to the surface of the liquid. When the pressure of a liquid at a point inside it = 2 × 10^6 N/m^2, it means that the weight of the liquid column whose base is the unit area surrounding this point and whose height is the vertical distance between this point and the liquid surface = 2 × 10^6 N.

Deduction of the pressure value at a point inside a liquid

Imagine plate (X) of area (A) at a depth (h) inside a liquid of density ρ. This plate acts as the base of a column of the liquid, and the force acting on the plate (X) is the weight of the liquid column whose height is (h) and whose cross-section area is (A). The weight of the liquid column (F_g) is determined by the relation:

F_g = mg

Where (m) is the mass of the liquid column:

m = ρ V
V = A h
∴ F_g = A h ρ g
P = F_g / A = A h ρ g / A
∴ P = ρ g h

If the liquid surface is open to air, then the total pressure at this point:

P = P_a + ρ g h

Where P_a is the atmospheric pressure. The pressure on a body at the bottom of a liquid is perpendicular to each point of its surface.

Factors affecting the pressure at a point inside the liquid:

1. Density of the liquid (ρ), directly proportional: P ∝ ρ at constant g and h.
2. Acceleration due to gravity (g), directly proportional: P ∝ g at constant ρ and h; g changes slightly from one place to another.
3. Point depth (h), directly proportional.

It is clear that as the depth (h) increases, the pressure (P) increases, where P ∝ h. That is why the base of a dam must be thicker than its top, to withstand the increase in pressure at the greater depth. When the depth of points below the surface is the same and so is the density (ρ), the pressures become the same, where P = ρ g h.

All the points at the same horizontal level inside the liquid have the same pressure; that's why open seas and oceans have one horizontal surface of water. The pressure at a point inside the liquid is a scalar quantity.
Applications on the pressure at a point (Connected vessels, U-shaped tube and Mercuric barometer)
The Stats Guy

I wrote last week about how the number of cases of coronavirus were following a textbook exponential growth pattern. I didn't look at the number of deaths from coronavirus at the time, as there were too few cases in the UK for a meaningful analysis. Sadly, that is no longer true, so I'm going to take a look at that today. However, first, let's have a little update on the number of cases.

There is a glimmer of good news here, in that the number of cases has been rising more slowly than we might have predicted based on the figures I looked at last week. Here is the growth in cases with the predicted line based on last week's numbers.

As you can see, cases in the last week have consistently been lower than predicted based on the trend up to last weekend. However, I'm afraid this is only a tiny glimmer of good news. It's not clear whether this represents a real slowing in the number of cases or merely reflects the fact that not everyone showing symptoms is being tested any more. It may just be that fewer cases are being recorded.

So what of the number of deaths? I'm afraid this does not look good. This is also showing a classic exponential growth pattern so far:

The last couple of days' figures are below the fitted line, so there is a tiny shred of evidence that the rate may be slowing down here too, but I don't think we can read too much into just 2 days' figures. Hopefully it will become clearer over the coming days.

One thing which is noteworthy is that the rate of increase of deaths is faster than the rate of increase of total cases. While the number of cases is doubling, on average, every 2.8 days, the number of deaths is doubling, on average, every 1.9 days. Since it's unlikely that the death rate from the disease is increasing over time, this does suggest that the number of cases is being recorded less completely as time goes by.

So what happens if the number of deaths continues growing at the current rate?
I'm afraid it doesn't look pretty: (note that I've plotted this on a log scale).

At that rate of increase, we would reach 10,000 deaths by 1 April and 100,000 deaths by 7 April. I really hope that the current restrictions being put in place take effect quickly so that the rate of increase slows down soon. If not, then this virus really is going to have horrific effects on the UK population (and of course on other countries, but I've only looked at UK figures here). In the meantime, please keep away from other people as much as you can and keep washing those hands.

Covid-19 and exponential growth

One thing about the Covid-19 outbreak that has been particularly noticeable to me as a medical statistician is that the number of confirmed cases reported in the UK has been following a classic exponential growth pattern. For those who are not familiar with what exponential growth is, I'll start with a short explanation before I move on to what this means for how the epidemic is likely to develop in the UK. If you already understand what exponential growth is, then feel free to skip to the section "Implications for the UK Covid-19 epidemic".

A quick introduction to exponential growth

If we think of something, such as the number of cases of Covid-19 infection, as growing at a constant rate, then we might think that we would have a similar number of new cases each day. That would be a linear growth pattern. Let's assume that we have 50 new cases each day; then after 60 days we'll have 3000 cases. A graph of that would look like this:

That's not what we're seeing with Covid-19 cases. Rather than following a linear growth pattern, we're seeing an exponential growth pattern. With exponential growth, rather than adding a constant number of new cases each day, the number of cases increases by a constant percentage amount each day. Equivalently, the number of cases multiplies by a constant factor in a constant time interval. Let's say that the number of cases doubles every 3 days.
On day zero we have just one case, on day 3 we have 2 cases, on day 6 we have 4 cases, on day 9 we have 8 cases, and so on. This makes sense for an infectious disease epidemic. If you imagine that each person who is infected can infect (for example) 2 new people, then you would get a pattern very similar to this. When only one person is infected, that's just 2 new people who get infected, but if 100 people have the disease, then 200 people will get infected in the same time.

On the face of it, the example above sounds like it's growing much less quickly than my first example where we have 50 new cases each day. But if you are doubling the number of cases each time, then you start to get to scarily large numbers quite quickly. If we carry on for 60 days, then although the number of cases isn't increasing much at first, it eventually starts to increase at an alarming rate, and by the end of 60 days we have over a million cases. This is what it looks like if you plot the graph:

It's actually quite hard to see what's happening at the beginning of that curve, so to make it easier to see, let's use the trick of plotting the number of cases on a logarithmic scale. What that means is that a constant interval on the vertical axis (generally known as the y axis) represents not a constant difference, but a constant ratio. Here, the ticks on the y axis represent an increase in cases by a factor of 10. Note that when you plot exponential growth on a logarithmic scale, you get a straight line. That's because we're increasing the number of cases by a constant ratio in each unit time, and a constant ratio corresponds to a constant distance on the y axis.

Implications for the UK Covid-19 epidemic

OK, so that's what exponential growth looks like. What can we see about the number of confirmed Covid-19 cases in the UK? Public Health England makes the data available for download here.
The data have not yet been updated with today's count of cases as I write this, so I added in today's number (1372) based on a tweet by the Department of Health and Social Care. If you plot the number of cases by date, it looks like this: That's pretty reminiscent of our exponential growth curve above, isn't it? It's worth noting that the numbers I've shown are almost certainly an underestimate of the true number of cases. First, it seems likely that some people who are infected will have only very mild (or even no) symptoms, and will not bother to contact the health services to get tested. You might say that it doesn't matter if the numbers don't include people who aren't actually ill, and to some extent it doesn't, but remember that they may still be able to infect others. Also, there is a delay from infection to appearing in the statistics. So the official number of confirmed cases includes people only after they have caught the disease, gone through the incubation period, developed symptoms that were bothersome enough to seek medical help, got tested, and had the test results come back. This represents people who were infected probably at least a week ago. Given that the number of cases is growing so rapidly, the number of people actually infected today will be considerably higher than today's statistics for confirmed cases. Now, before I get into analysis, I need to decide where to start the analysis. I'm going to start from 29 February, as that was when the first case of community transmission was reported, so by then the disease was circulating within the UK community. Before then it had mainly been driven by people arriving in the UK from places abroad where they caught the disease, so the pattern was probably a bit different then. If we start the graph at 29 February, it looks like this: Now, what happens if we fit an exponential growth curve to it?
It looks like this: (Technical note for stats geeks: the way we actually do that is with a linear regression analysis of the logarithm of the number of cases on time, calculate the predicted values of the logarithm from that regression analysis, and then back-transform to get the number of cases.) As you can see, it’s a pretty good fit to an exponential curve. In fact it’s really very good indeed. The R-squared value from the regression analysis is 0.99. R-squared is a measure of how well the data fit the modelled relationship on a scale of 0 to 1, so 0.99 is a damn near perfect fit. We can also plot it on a logarithmic scale, when it should look like a straight line: And indeed it does. There are some interesting statistics we can calculate from the above analysis. The average rate of growth is about a 30% increase in the number of cases each day. That means that the number of cases doubles about every 2.6 days, and increases tenfold in about 8.6 days. So what happens if the number of cases keeps growing at the same rate? Let’s extrapolate that line for another 6 weeks: This looks pretty scary. If it continues at the same rate of exponential growth, we’ll get to 10,000 cases by 23 March (which is only just over a week away), to 100,000 cases by the end of March, to a million cases by 9 April, and to 10 million cases by 18 April. By 24 April the entire population of the UK (about 66 million) will be infected. Now, obviously it’s not going to continue growing at the same rate for all that time. If nothing else, it will stop growing when it runs out of people to infect. 
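The doubling and tenfold figures quoted above are tied together by the fitted growth rate. A quick sketch (mine, not the post's code), starting from the 2.6-day doubling time:

```python
import math

doubling_days = 2.6                           # from the regression fit
daily_growth = 2 ** (1 / doubling_days) - 1   # ~0.3055, i.e. roughly a 30% daily increase
tenfold_days = doubling_days * math.log2(10)  # ~8.6 days for a tenfold increase

print(round(daily_growth, 3), round(tenfold_days, 1))
```

The same relationship drives the extrapolation: a tenfold rise takes log2(10) ≈ 3.32 doublings, which is why the projected milestones (10,000, 100,000, a million cases) arrive at roughly equal intervals on a log-scale plot.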
And even if the entire population have not been infected, the rate of new infections will surely slow down once enough people have been infected, as it becomes increasingly unlikely that anyone with the disease who might be able to pass it on will encounter someone who hasn't yet had it (I'm assuming here that people who have already had the disease will be immune to further infections, which seems likely, although we don't yet know that for sure). However, that effect won't kick in until at least several million people have been infected, a situation which we will reach by the middle of April if other factors don't cause the rate to slow down. Several million people being infected is a pretty scary prospect. Even if the fatality rate is "only" about 1%, then 1% of several million is several tens of thousands of deaths. So will the rate slow down before we get to that stage? I genuinely don't know. I'm not an expert in infectious disease epidemiology. I can see that the data are following a textbook exponential growth pattern so far, but I don't know how long it will continue. Governments in many countries are introducing drastic measures to attempt to reduce the spread of the disease. The UK government is not. It is not clear to me why the UK government is taking a more relaxed approach. They say that they are being guided by the science, but since they have not published the details of their scientific modelling and reasoning, it is not possible for the rest of us to judge whether their interpretation of the science is more reasonable than that of many other European countries. Maybe the rate of infection will start to slow down now that there is so much awareness of the disease and of precautions such as hand-washing, and that even in the absence of government advice, many large gatherings are being cancelled. Or maybe it won't. We will know more over the coming weeks. One final thought.
The government's latest advice is for people with mild forms of the disease not to seek medical help. This means that the rate of increase of the disease may well appear to slow down as measured by the official statistics, as many people with mild disease will no longer be tested and so not be counted. It will be hard to know whether the rate of infection is really slowing down.
A162176 - OEIS %I #5 Jul 19 2015 10:22:21 %S 1,40,819,11440,122589,1074488,8020830,52427192,306189025,1622495952, %T 7895219982,35623107520,150221110689,595982725640,2237008815175, %U 7981961442768,27186526166255,88708246063240,278172606877930 %N Number of reduced words of length n in the Weyl group B_40. %C Computed with MAGMA using commands similar to those used to compute A161409. %D J. E. Humphreys, Reflection Groups and Coxeter Groups, Cambridge, 1990. See under Poincaré polynomial. %D N. Bourbaki, Groupes et alg. de Lie, Chap. 4, 5, 6. (The group is defined in Planche II.) %F G.f. for B_m is the polynomial Prod_{k=1..m}(1-x^(2k))/(1-x). Only finitely many terms are nonzero. This is a row of the triangle in A128084. %K nonn %O 0,2 %A _John Cannon_ and _N. J. A. Sloane_, Nov 30 2009
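The %F generating function can be checked against the %S/%T data directly. This short Python sketch (not part of the OEIS entry) expands Prod_{k=1..m}(1-x^(2k))/(1-x), rewritten as Prod_{k=1..m}(1 + x + ... + x^(2k-1)), for m = 40 and prints the low-order coefficients:

```python
def weyl_b_poincare(m, n_max):
    """Coefficients of prod_{k=1..m} (1 + x + ... + x^{2k-1}) up to x^n_max.

    This is the Poincare polynomial of the Weyl group B_m with each factor
    (1 - x^{2k})/(1 - x) expanded, so only polynomial arithmetic is needed.
    """
    coeffs = [1]
    for k in range(1, m + 1):
        new = [0] * (n_max + 1)
        for i, a in enumerate(coeffs):
            # multiply by 1 + x + ... + x^{2k-1}, truncating past x^n_max
            for j in range(min(2 * k, n_max + 1 - i)):
                new[i + j] += a
        coeffs = new
    return coeffs

print(weyl_b_poincare(40, 5))  # [1, 40, 819, 11440, 122589, 1074488]
```

The printed values match the start of the %S line, i.e. the counts of reduced words of length 0 through 5 in B_40.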
Advances in Pure and Applied Mathematics
List of Articles

Limit sets and global dynamic for 2-D divergence-free vector fields
Habib Marzougui
The global structure of divergence-free vector fields on closed surfaces is investigated. We prove that if $M$ is a closed surface and $\mathcal{V}$ is a divergence-free $C^1$-vector field with finitely many singularities on $M$, then every orbit $L$ of $\mathcal{V}$ is one of the following types: (i) a singular point, (ii) a periodic orbit, (iii) a closed (non-periodic) orbit in $M^* = M - \mathrm{Sing}(\mathcal{V})$, (iv) a locally dense orbit, where $\mathrm{Sing}(\mathcal{V})$ denotes the set of singular points of $\mathcal{V}$. On the other hand, we show that the complement in $M$ of periodic components and minimal components is a compact invariant subset consisting of singularities and closed (non-compact) orbits in $M^*$. These results extend those of T. Ma and S. Wang in [Discrete Contin. Dynam. Systems, 7 (2001), 431-445], established when the divergence-free vector field $\mathcal{V}$ is regular, that is, when all its singular points are non-degenerate.

Weighted estimates for operators associated to the Bergman-Besov kernels
David Békollè, Adriel R. Keumo, Edgar L. Tchoundja, Brett D. Wick
We characterize the weights for which the standard weighted integral operators induced by the Bergman-Besov kernels are bounded between two weighted Lebesgue classes on the unit ball of $\mathbb{C}^N$, in terms of a Békollè-Bonami type condition on the weights. To accomplish this we employ the proof strategy originated by Békollè.

A result on Bruck Conjecture related to Shift Polynomials
B. Narasimha Rao, Shilpa N.
This paper mainly concerns establishing the Bruck conjecture for a differential-difference polynomial generated by an entire function. The polynomial considered is of finite order and involves the entire function $f(z)$ and its shift $f(z + c)$, where $c \in \mathbb{C}$.
Suitable examples are given to prove the sharpness of sharing exceptional values of Borel and Nevanlinna.

Existence Results for Singular p(x)-Laplacian Equation
R. Alsaedi, K. Ben Ali, A. Ghanmi
This paper is concerned with the existence of solutions for the following class of singular fourth-order elliptic equations:
$$\left\{\begin{array}{ll} \Delta\Big(|x|^{p(x)}|\Delta u|^{p(x)-2}\Delta u\Big) = a(x)u^{-\gamma(x)} + \lambda f(x,u), & \mbox{in } \Omega, \\ u = \Delta u = 0, & \mbox{on } \partial\Omega, \end{array}\right.$$
where $\Omega$ is a smooth bounded domain in $\mathbb{R}^N$, $\gamma: \overline{\Omega} \rightarrow (0,1)$ is a continuous function, $f \in C^{1}(\overline{\Omega} \times \mathbb{R})$, $p: \overline{\Omega} \longrightarrow (1,\infty)$, and $a$ is a function that is almost everywhere positive in $\Omega$. Using variational techniques combined with the theory of the generalized Lebesgue-Sobolev spaces, we prove the existence of at least one nontrivial weak solution.
7.7 Angles, Triangles, and Prisms Lesson 1 • I can find unknown angle measures by reasoning about adjacent angles with known measures. • I can recognize when an angle measures $90^\circ$, $180^\circ$, or $360^\circ$. Lesson 2 • I can find unknown angle measures by reasoning about complementary or supplementary angles. • I can recognize when adjacent angles are complementary or supplementary. Lesson 3 • I can determine if angles that are not adjacent are complementary or supplementary. • I can explain what vertical angles are in my own words. Lesson 4 • I can reason through multiple steps to find unknown angle measures. • I can recognize when an equation represents a relationship between angle measures. Lesson 5 • I can write an equation to represent a relationship between angle measures and solve the equation to find unknown angle measures. Lesson 6 • I can show that the 3 side lengths that form a triangle cannot be rearranged to form a different triangle. • I can show that the 4 side lengths that form a quadrilateral can be rearranged to form different quadrilaterals. Lesson 7 • I can reason about a figure with an unknown angle. • I can show whether or not 3 side lengths will make a triangle. Lesson 8 • I understand that changing which sides and angles are next to each other can make different triangles. Lesson 9 • Given two angle measures and one side length, I can draw different triangles with these measurements or show that these measurements determine one unique triangle or no triangle. Lesson 10 • Given two side lengths and one angle measure, I can draw different triangles with these measurements or show that these measurements determine one unique triangle or no triangle. Lesson 11 • I can explain that when a three dimensional figure is sliced it creates a face that is two dimensional. • I can picture different cross sections of prisms and pyramids. 
Lesson 12 • I can explain why the volume of a prism can be found by multiplying the area of the base and the height of the prism. Lesson 13 • I can calculate the volume of a prism with a complicated base by decomposing the base into quadrilaterals or triangles. Lesson 14 • I can find and use shortcuts when calculating the surface area of a prism. • I can picture the net of a prism to help me calculate its surface area. Lesson 15 • I can decide whether I need to find the surface area or volume when solving a problem about a real-world situation. Lesson 16 • I can solve problems involving the volume and surface area of children’s play structures. Lesson 17 • I can build a triangular prism from scratch.
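Two of the targets above are mechanical enough to sketch in code: Lesson 7's side-length test (the triangle inequality) and Lesson 12's prism volume formula. This Python fragment is illustrative only and not part of the lesson materials:

```python
def can_form_triangle(a, b, c):
    """Three positive lengths form a triangle exactly when each side
    is shorter than the sum of the other two (triangle inequality)."""
    return a + b > c and a + c > b and b + c > a

def prism_volume(base_area, height):
    """Volume of a prism = area of the base times the height (Lesson 12)."""
    return base_area * height

print(can_form_triangle(3, 4, 5))  # True
print(can_form_triangle(1, 2, 4))  # False: 1 + 2 < 4
print(prism_volume(6.0, 10.0))     # 60.0
```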
Logic seminar: Aris Papadopoulos (University of Leeds)
Dates: 13 December 2023
Times: 15:15 - 16:15
What is it: Seminar
Organiser: Department of Mathematics
Who is it for: University staff, External researchers, Adults, Alumni, Current University students
Title: Zarankiewicz’s Problem and Model Theory
Abstract: A shower thought that anyone interested in graph theory must have had at some point in their lives is the following: 'How "sparse" must a given graph be, if I know that it has no "dense" subgraphs?' This curiosity definitely crossed the mind of Polish mathematician K. Zarankiewicz, who asked a version of this question formally in 1951. In the years that followed, many central figures in the development of extremal combinatorics contemplated this problem, giving various kinds of answers. Some of these will be surveyed in the first part of my talk. So far so good, but this is a logic seminar and the title says the words "Model Theory"… In the second part of my talk, I will discuss how the celebrated Szemerédi-Trotter theorem gave a starting point to the study of Zarankiewicz’s problem in "geometric" contexts, and how the language of model theory has been able to capture exactly what these contexts are. I will then ramble about improvements to the classical answers to Zarankiewicz’s problem, when we restrict our attention to semilinear/semibounded o-minimal structures, Presburger arithmetic, and various kinds of Hrushovski constructions. The new results that will appear in the talk were obtained jointly with Pantelis Eleftheriou.
Travel and Contact Information
Find event: Frank Adams 1 (and zoom, link in email), Alan Turing Building
29Si NMR chemical shifts of silane derivatives We compute the ground-state energy of atoms and quantum dots with a large number N of electrons. Both systems are described by a nonrelativistic Hamiltonian of electrons in a d-dimensional space. The electrons interact via the Coulomb potential. In the cas ...
Math Worksheets For Second Graders Printable

Math Worksheets For Second Graders Printable - Web nearly 2,000 second grade math worksheets from scholastic teachables span more than 20 different math topics. A math website students love! Core math worksheets word problems algebra other worksheets probability worksheets prime and. 1st grade math 2nd grade math. 3rd grade math 4th grade math. 5th grade math 6th grade math. Math for week of june 5: Math for week of june 12: Math for week of june 19:. Web 2nd grade math worksheets math worksheets go ad free! Web math worksheets workbooks for second grade; Web search printable 2nd grade worksheets reading, math, science, history—all of it, and more, starts to come fast and furious in second grade. Sign up and you'll also get access to scholastic's more than. Scholastic publishes new printable math worksheets each. Splashlearn offers addition, multiplication, and other printable. Discover learning games, guided lessons, and other interactive activities for children. Ad practice 2nd grade math on ixl!

Subtraction for Kids 2nd Grade
2nd Grade Math Worksheets Multiplication Learning Printable
Multiplication 2nd Grade Math Worksheets Pdf Kidsworksheetfun
Repeated Addition Worksheet 2nd Grade Math Worksheets Printable
Printable 2nd Grade Timed Math Worksheets Math Worksheets Printable
Free 2nd Grade Math Worksheets Activity Shelter
2nd Grade Math Worksheets Best Coloring Pages For Kids
Free Second Grade Math Practice Worksheets Math practice worksheets

Web math worksheets for 2nd graders. Web free 2nd grade addition math worksheets workbook. Web 2nd grade math worksheets: Addition within 10 with no regrouping. Addition within 20 with no regrouping. (free printables) addition with no regrouping. Addition up to three digits, add and carry, addition word problems, subtraction up to 3 digits, mixed operations, data and graphs, sets and venn. Second grade math worksheets for june : Each math worksheet has an answer sheet attached on the second page,. Web these printable math worksheets are great for review, morning work, seatwork, math centers or stations, homework, assessment, and more.pages included: Maths is an interesting subject, but children might find it boring sometimes. That’s why you’ll want to tap into. Related Post:
4. EGR 210 - SOLID MECHANICS Syllabus There are three objectives for this course. First, it is important that you master the content of this course as this is the foundation for engineering analysis as well as almost all of the mechanical engineering courses that follow. Second, it is important that you master the engineering approach to problem solving. Finally, you will be expected to develop critical thinking skills by applying concepts learned in class to mechanical systems. Assistant Professor, School of Engineering Office: Pad 146 OR Suite 618, Eberhard Center www: http://claymore.engineer.gvsu.edu M, W, 2-3pm, F, 2-4pm, PAD 168 Monday, Wednesday, Friday 11-12 Mechanics of Materials, by J.M. Gere and S.P. Timoshenko, PWS Publishing EGR 209/210 - Statics and Solid Mechanics Lecture Notes, by H. Jack EGR 210 students need to have taken a basic statics course. There will be a final exam. All students will be expected to write tests at the scheduled time; make-up tests will be given only in the most extreme circumstances at the discretion of the instructor. You will not be able to learn this material if you do not do problems. To encourage you to do homework in a professional manner, random samples of the assignments will be collected and graded. This homework may be collected as soon as the next class after introduction, and when collected it is due immediately. All homework solutions should be logical, concise, clear, and readable. In general the following rules should be observed: • Do all work on engineering computation paper, or on Mathcad. • Multiple page solutions should be stapled and given page numbers. • At the top of the page indicate your name, the date the work was done, and the course number. • Each problem should begin with a brief problem statement (do not copy out the question). • Free body diagrams will be required for most solutions, and should appear before the calculations.
• The problem solution should be concise, logical, clear, neat, and correct. • The final answer should be clearly indicated with a box, or leader lines. • Mathcad solutions should be done entirely within Mathcad (i.e., not with a calculator or scrap paper). The objective of this project is to build a beam with the highest failure-load-to-weight ratio using approved materials. A list of approved materials and geometry constraints will be provided. All beams will be tested and a report will be required. The report will detail how the student applied concepts learned in the class to the project. The grade for this course will be determined as follows: 09/09 5 Axial and shear stress 6 Analysis of stress in rigid bodies 09/14 7 Oblique and generalized stress 12 Centroids and parallel axis theorem review 14 Loading and Factor of safety 18 Internal forces in beams review 10/05 19 Moments of Inertia Review The chart below shows how the numerical grades in the course will be converted to letter grades.
Temperature Change and Heat Capacity Learning Objectives By the end of this section, you will be able to: • Observe heat transfer and change in temperature and mass. • Calculate final temperature after heat transfer between two objects. One of the major effects of heat transfer is temperature change: heating increases the temperature while cooling decreases it. We assume that there is no phase change and that no work is done on or by the system. Experiments show that the transferred heat depends on three factors—the change in temperature, the mass of the system, and the substance and phase of the substance. The dependence on temperature change and mass are easily understood. Owing to the fact that the (average) kinetic energy of an atom or molecule is proportional to the absolute temperature, the internal energy of a system is proportional to the absolute temperature and the number of atoms or molecules. Owing to the fact that the transferred heat is equal to the change in the internal energy, the heat is proportional to the mass of the substance and the temperature change. The transferred heat also depends on the substance so that, for example, the heat necessary to raise the temperature is less for alcohol than for water. For the same substance, the transferred heat also depends on the phase (gas, liquid, or solid). Heat Transfer and Temperature Change The quantitative relationship between heat transfer and temperature change contains all three factors: Q = mcΔT, where Q is the symbol for heat transfer, m is the mass of the substance, and ΔT is the change in temperature. The symbol c stands for specific heat and depends on the material and phase. The specific heat is the amount of heat necessary to change the temperature of 1.00 kg of mass by 1.00ºC. The specific heat c is a property of the substance; its SI unit is J/(kg ⋅ K) or J/(kg ⋅ ºC). Recall that the temperature change (ΔT) is the same in units of kelvin and degrees Celsius. 
If heat transfer is measured in kilocalories, then the unit of specific heat is kcal/(kg ⋅ ºC). Values of specific heat must generally be looked up in tables, because there is no simple way to calculate them. In general, the specific heat also depends on the temperature. Table 1 lists representative values of specific heat for various substances. Except for gases, the temperature and volume dependence of the specific heat of most substances is weak. We see from this table that the specific heat of water is five times that of glass and ten times that of iron, which means that it takes five times as much heat to raise the temperature of water the same amount as for glass and ten times as much heat to raise the temperature of water as for iron. In fact, water has one of the largest specific heats of any material, which is important for sustaining life on Earth. Example 1. Calculating the Required Heat: Heating Water in an Aluminum Pan A 0.500 kg aluminum pan on a stove is used to heat 0.250 liters of water from 20.0ºC to 80.0ºC. (a) How much heat is required? What percentage of the heat is used to raise the temperature of (b) the pan and (c) the water? The pan and the water are always at the same temperature. When you put the pan on the stove, the temperature of the water and the pan is increased by the same amount. We use the equation for the heat transfer for the given temperature change and mass of water and aluminum. The specific heat values for water and aluminum are given in Table 1. Because water is in thermal contact with the aluminum, the pan and the water are at the same temperature. Calculate the temperature difference: ΔT = T_f − T_i = 60.0ºC. Calculate the mass of water. Because the density of water is 1000 kg/m^3, one liter of water has a mass of 1 kg, and the mass of 0.250 liters of water is m_w = 0.250 kg. Calculate the heat transferred to the water.
Use the specific heat of water in Table 1: Q_w = m_w c_w ΔT = (0.250 kg)(4186 J/kg ⋅ ºC)(60.0ºC) = 62.8 kJ. Calculate the heat transferred to the aluminum. Use the specific heat for aluminum in Table 1: Q_Al = m_Al c_Al ΔT = (0.500 kg)(900 J/kg ⋅ ºC)(60.0ºC) = 2.70 × 10^4 J = 27.0 kJ. Compare the percentage of heat going into the pan versus that going into the water. First, find the total transferred heat: Q_total = Q_w + Q_Al = 62.8 kJ + 27.0 kJ = 89.8 kJ. Thus, the amount of heat going into heating the pan is [latex]\frac{27.0\text{ kJ}}{89.8\text{ kJ}}\times100\%=30.1\%\\[/latex] and the amount going into heating the water is [latex]\frac{62.8\text{ kJ}}{89.8\text{ kJ}}\times100\%=69.9\%\\[/latex]. In this example, the heat transferred to the container is a significant fraction of the total transferred heat. Although the mass of the pan is twice that of the water, the specific heat of water is over four times greater than that of aluminum. Therefore, it takes a bit more than twice the heat to achieve the given temperature change for the water as compared to the aluminum pan. Example 2. Calculating the Temperature Increase from the Work Done on a Substance: Truck Brakes Overheat on Downhill Runs Truck brakes used to control speed on a downhill run do work, converting gravitational potential energy into increased internal energy (higher temperature) of the brake material. This conversion prevents the gravitational potential energy from being converted into kinetic energy of the truck. The problem is that the mass of the truck is large compared with that of the brake material absorbing the energy, and the temperature increase may occur too fast for sufficient heat to transfer from the brakes to the environment. Calculate the temperature increase of 100 kg of brake material with an average specific heat of 800 J/kg ⋅ ºC if the material retains 10% of the energy from a 10,000-kg truck descending 75.0 m (in vertical displacement) at a constant speed.
If the brakes are not applied, gravitational potential energy is converted into kinetic energy. When brakes are applied, gravitational potential energy is converted into internal energy of the brake material. We first calculate the gravitational potential energy (Mgh) that the entire truck loses in its descent and then find the temperature increase produced in the brake material alone. 1. Calculate the change in gravitational potential energy as the truck goes downhill Mgh = (10,000 kg)(9.80 m/s^2)(75.0 m) = 7.35 × 10^6 J. 2. Calculate the temperature from the heat transferred using Q = Mgh and [latex]\Delta{T}=\frac{Q}{mc}\\[/latex], where m is the mass of the brake material. Insert the values m = 100 kg and c = 800 J/kg ⋅ ºC to find [latex]\Delta{T}=\frac{\left(7.35\times10^6\text{ J}\right)}{\left(100\text{ kg}\right)\left(800\text{ J/kg}^{\circ}\text{C}\right)}=92^{\circ}C\\[/latex]. This temperature is close to the boiling point of water. If the truck had been traveling for some time, then just before the descent, the brake temperature would likely be higher than the ambient temperature. The temperature increase in the descent would likely raise the temperature of the brake material above the boiling point of water, so this technique is not practical. However, the same idea underlies the recent hybrid technology of cars, where mechanical energy (gravitational potential energy) is converted by the brakes into electrical energy (battery). Table 1. 
Specific Heats^[1] of Various Substances (specific heat c, in J/kg ⋅ ºC and kcal/kg ⋅ ºC^[2])

Solids:
Aluminum | 900 | 0.215
Asbestos | 800 | 0.19
Concrete, granite (average) | 840 | 0.20
Copper | 387 | 0.0924
Glass | 840 | 0.20
Gold | 129 | 0.0308
Human body (average at 37 °C) | 3500 | 0.83
Ice (average, −50°C to 0°C) | 2090 | 0.50
Iron, steel | 452 | 0.108
Lead | 128 | 0.0305
Silver | 235 | 0.0562
Wood | 1700 | 0.4

Liquids:
Benzene | 1740 | 0.415
Ethanol | 2450 | 0.586
Glycerin | 2410 | 0.576
Mercury | 139 | 0.0333
Water (15.0 °C) | 4186 | 1.000

Gases^[3] (c[v], with c[p] at 1.00 atm in parentheses):
Air (dry) | 721 (1015) | 0.172 (0.242)
Ammonia | 1670 (2190) | 0.399 (0.523)
Carbon dioxide | 638 (833) | 0.152 (0.199)
Nitrogen | 739 (1040) | 0.177 (0.248)
Oxygen | 651 (913) | 0.156 (0.218)
Steam (100°C) | 1520 (2020) | 0.363 (0.482)

Note that Example 2 is an illustration of the mechanical equivalent of heat. Alternatively, the temperature increase could be produced by a blow torch instead of mechanically. Example 3. Calculating the Final Temperature When Heat Is Transferred Between Two Bodies: Pouring Cold Water in a Hot Pan Suppose you pour 0.250 kg of 20.0ºC water (about a cup) into a 0.500-kg aluminum pan off the stove with a temperature of 150ºC. Assume that the pan is placed on an insulated pad and that a negligible amount of water boils off. What is the temperature when the water and pan reach thermal equilibrium a short time later? The pan is placed on an insulated pad so that little heat transfer occurs with the surroundings. Originally the pan and water are not in thermal equilibrium: the pan is at a higher temperature than the water. Heat transfer then restores thermal equilibrium once the water and pan are in contact. Because heat transfer between the pan and water takes place rapidly, the mass of evaporated water is negligible and the magnitude of the heat lost by the pan is equal to the heat gained by the water. The exchange of heat stops once a thermal equilibrium between the pan and the water is achieved. The heat exchange can be written as |Q[hot]| = Q[cold].
Use the equation for heat transfer Q = mcΔT to express the heat lost by the aluminum pan in terms of the mass of the pan, the specific heat of aluminum, the initial temperature of the pan, and the final temperature: Q[hot] = m[Al]c[Al](T[f] − 150ºC). Express the heat gained by the water in terms of the mass of the water, the specific heat of water, the initial temperature of the water and the final temperature: Q[cold] = m[W]c[W](T[f] − 20.0ºC). Note that Q[hot]<0 and Q[cold]>0 and that they must sum to zero because the heat lost by the hot pan must be the same as the heat gained by the cold water: m[Al]c[Al](T[f] − 150ºC) + m[W]c[W](T[f] − 20.0ºC) = 0. This is an equation for the unknown final temperature, T[f]. Bring all terms involving T[f] to the left-hand side and all other terms to the right-hand side. Solve for T[f], and insert the numerical values: [latex]\begin{array}{lll}T_{\text{f}}&=&\frac{\left(0.500\text{ kg}\right)\left(900\text{ J/kg}^{\circ}\text{C}\right)\left(150^{\circ}\text{C}\right)+\left(0.250\text{ kg}\right)\left(4186\text{ J/kg}^{\circ}\text{C}\right)\left(20.0^{\circ}\text{C}\right)}{\left(0.500\text{ kg}\right)\left(900\text{ J/kg}^{\circ}\text{C}\right)+\left(0.250\text{ kg}\right)\left(4186\text{ J/kg}^{\circ}\text{C}\right)}\\\text{ }&=&\frac{88430\text{ J}}{1496.5\text{ J}/^{\circ}\text{C}}\\\text{ }&=&59.1^{\circ}\text{C}\end{array}\\[/latex] This is a typical calorimetry problem—two bodies at different temperatures are brought in contact with each other and exchange heat until a common temperature is reached. Why is the final temperature so much closer to 20.0ºC than 150ºC? The reason is that water has a greater specific heat than most common substances and thus undergoes a small temperature change for a given heat transfer. A large body of water, such as a lake, requires a large amount of heat to increase its temperature appreciably. This explains why the temperature of a lake stays relatively constant during a day even when the temperature change of the air is large.
However, the water temperature does change over longer times (e.g., summer to winter). Take-Home Experiment: Temperature Change of Land and Water What heats faster, land or water? To study differences in heat capacity: • Place equal masses of dry sand (or soil) and water at the same temperature into two small jars. (The average density of soil or sand is about 1.6 times that of water, so you can achieve approximately equal masses by using 50% more water by volume.) • Heat both (using an oven or a heat lamp) for the same amount of time. • Record the final temperature of the two masses. • Now bring both jars to the same temperature by heating for a longer period of time. • Remove the jars from the heat source and measure their temperature every 5 minutes for about 30 minutes. Which sample cools off the fastest? This activity replicates the phenomena responsible for land breezes and sea breezes. Check Your Understanding If 25 kJ is necessary to raise the temperature of a block from 25ºC to 30ºC, how much heat is necessary to heat the block from 45ºC to 50ºC? The heat transfer depends only on the temperature difference. Since the temperature differences are the same in both cases, the same 25 kJ is necessary in the second case. Section Summary • The transfer of heat Q that leads to a change ΔT in the temperature of a body with mass m is Q = mcΔT, where c is the specific heat of the material. This relationship can also be considered as the definition of specific heat. Conceptual Questions 1. What three factors affect the heat transfer that is necessary to change an object’s temperature? 2. The brakes in a car increase in temperature by ΔT when bringing the car to rest from a speed v. How much greater would ΔT be if the car initially had twice the speed? You may assume the car to stop sufficiently fast so that no heat transfers out of the brakes. Problems & Exercises 1. On a hot day, the temperature of an 80,000-L swimming pool increases by 1.50ºC. 
What is the net heat transfer during this heating? Ignore any complications, such as loss of water by evaporation. 2. Show that 1 cal/g · ºC = 1 kcal/kg · ºC. 3. To sterilize a 50.0-g glass baby bottle, we must raise its temperature from 22.0ºC to 95.0ºC. How much heat transfer is required? 4. The same heat transfer into identical masses of different substances produces different temperature changes. Calculate the final temperature when 1.00 kcal of heat transfers into 1.00 kg of the following, originally at 20.0ºC: (a) water; (b) concrete; (c) steel; and (d) mercury. 5. Rubbing your hands together warms them by converting work into thermal energy. If a woman rubs her hands back and forth for a total of 20 rubs, at a distance of 7.50 cm per rub, and with an average frictional force of 40.0 N, what is the temperature increase? The mass of tissues warmed is only 0.100 kg, mostly in the palms and fingers. 6. A 0.250-kg block of a pure material is heated from 20.0ºC to 65.0ºC by the addition of 4.35 kJ of energy. Calculate its specific heat and identify the substance of which it is most likely composed. 7. Suppose identical amounts of heat transfer into different masses of copper and water, causing identical changes in temperature. What is the ratio of the mass of copper to water? 8. (a) The number of kilocalories in food is determined by calorimetry techniques in which the food is burned and the amount of heat transfer is measured. How many kilocalories per gram are there in a 5.00-g peanut if the energy from burning it is transferred to 0.500 kg of water held in a 0.100-kg aluminum cup, causing a 54.9ºC temperature increase? (b) Compare your answer to labeling information found on a package of peanuts and comment on whether the values are consistent. 9. Following vigorous exercise, the body temperature of an 80.0-kg person is 40.0ºC.
At what rate in watts must the person transfer thermal energy to reduce the body temperature to 37.0ºC in 30.0 min, assuming the body continues to produce energy at the rate of 150 W? 1 watt = 1 joule/second or 1 W = 1 J/s. 10. Even when shut down after a period of normal use, a large commercial nuclear reactor transfers thermal energy at the rate of 150 MW by the radioactive decay of fission products. This heat transfer causes a rapid increase in temperature if the cooling system fails (1 watt = 1 joule/second or 1 W = 1 J/s and 1 MW = 1 megawatt). (a) Calculate the rate of temperature increase in degrees Celsius per second (ºC/s) if the mass of the reactor core is 1.60 × 10^5 kg and it has an average specific heat of 0.3349 kJ/kg ⋅ ºC. (b) How long would it take to obtain a temperature increase of 2000ºC, which could cause some metals holding the radioactive materials to melt? (The initial rate of temperature increase would be greater than that calculated here because the heat transfer is concentrated in a smaller mass. Later, however, the temperature increase would slow down because the 5 × 10^5-kg steel containment vessel would also begin to heat up.) specific heat: the amount of heat necessary to change the temperature of 1.00 kg of a substance by 1.00 ºC Selected Solutions to Problems & Exercises 1. 5.02 × 10^8 J 3. 3.07 × 10^3 J 5. 0.171ºC 7. 10.8 9. 617 W 1. The values for solids and liquids are at constant volume and at 25ºC, except as noted. 2. These values are identical in units of cal/g ⋅ ºC. 3. c[v] at constant volume and at 20.0ºC, except as noted, and at 1.00 atm average pressure. Values in parentheses are c[p] at a constant pressure of 1.00 atm.
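The calorimetry relations used throughout this section are easy to cross-check numerically. A minimal Python sketch, using the Table 1 values for water and aluminum, reproduces the Example 1 heat budget and the Example 3 equilibrium temperature:

```python
# Specific heats from Table 1 (J/kg.C)
c_water = 4186.0
c_al = 900.0

# Example 1: heat 0.250 kg of water and a 0.500 kg aluminum pan by 60.0 C
m_w, m_al, dT = 0.250, 0.500, 60.0
Q_w = m_w * c_water * dT            # heat into the water (J)
Q_al = m_al * c_al * dT             # heat into the pan (J)
Q_total = Q_w + Q_al
pan_pct = 100 * Q_al / Q_total
print(Q_w, Q_al, Q_total, round(pan_pct, 1))   # 62790.0 27000.0 89790.0 30.1

# Example 3: equilibrium temperature of a 150 C pan and 20.0 C water,
# from m_al*c_al*(T_f - T_pan) + m_w*c_water*(T_f - T_w) = 0
T_pan, T_w = 150.0, 20.0
T_f = (m_al * c_al * T_pan + m_w * c_water * T_w) / (m_al * c_al + m_w * c_water)
print(round(T_f, 1))                           # 59.1
```

The second computation is just the |Q[hot]| = Q[cold] balance solved for the final temperature.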
How do you solve #2.3x^2-1.4x=6.8# using the quadratic formula? Answer 1 See a solution process below: First, subtract #color(red)(6.8)# from each side of the equation to put the equation in standard form: #2.3x^2 - 1.4x - color(red)(6.8) = 6.8 - color(red)(6.8)# #2.3x^2 - 1.4x - 6.8 = 0# The quadratic formula can now be used to solve this problem: According to the quadratic formula, for #color(red)(a)x^2 + color(blue)(b)x + color(green)(c) = 0#, the values of #x# which are the solutions to the equation are given by: #x = (-color(blue)(b) +- sqrt(color(blue)(b)^2 - (4color(red)(a)color(green)(c))))/(2 * color(red)(a))# Substituting #color(red)(2.3)# for #color(red)(a)#, #color(blue)(-1.4)# for #color(blue)(b)#, and #color(green)(-6.8)# for #color(green)(c)# gives: #x = (-color(blue)((-1.4)) +- sqrt(color(blue)((-1.4))^2 - (4 * color(red)(2.3) * color(green)(-6.8))))/(2 * color(red)(2.3))# #x = (color(blue)(1.4) +- sqrt(color(blue)(1.96) - (-62.56)))/4.6# #x = (color(blue)(1.4) +- sqrt(color(blue)(1.96) + 62.56))/4.6# #x = (color(blue)(1.4) +- sqrt(64.52))/4.6# #x = (color(blue)(1.4) +- sqrt(4 * 16.13))/4.6# #x = (color(blue)(1.4) +- sqrt(4)sqrt(16.13))/4.6# #x = (color(blue)(1.4) +- 2sqrt(16.13))/4.6# #x = (color(blue)(1.4) - 2sqrt(16.13))/4.6# and #x = (color(blue)(1.4) + 2sqrt(16.13))/4.6# #x = (color(blue)((2 * 0.7)) - 2sqrt(16.13))/(2 * 2.3)# and #x = (color(blue)((2 * 0.7)) + 2sqrt(16.13))/(2 * 2.3)# #x = (color(blue)(0.7) - sqrt(16.13))/2.3# and #x = (color(blue)(0.7) + sqrt(16.13))/2.3# Answer 2 To solve the equation 2.3x^2 - 1.4x = 6.8 using the quadratic formula: a = 2.3 b = -1.4 c = -6.8 The quadratic formula is: [x = \frac{{-b \pm \sqrt{{b^2 - 4ac}}}}{{2a}}] Substitute the values of a, b, and c into the formula: [x = \frac{{-(-1.4) \pm \sqrt{{(-1.4)^2 - 4(2.3)(-6.8)}}}}{{2(2.3)}}] [x = \frac{{1.4 \pm 
\sqrt{{1.96 + 62.56}}}}{{4.6}}] [x = \frac{{1.4 \pm \sqrt{{64.52}}}}{{4.6}}] [x = \frac{{1.4 \pm 8.0324}}{{4.6}}] [x_1 = \frac{{1.4 + 8.0324}}{{4.6}} = \frac{{9.4324}}{{4.6}} \approx 2.051] [x_2 = \frac{{1.4 - 8.0324}}{{4.6}} = \frac{{-6.6324}}{{4.6}} \approx -1.442] So, the solutions to the equation are approximately (x_1 \approx 2.051) and (x_2 \approx -1.442).
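A quick numerical check of this solution in Python (the discriminant is b² − 4ac = 1.96 + 62.56 = 64.52):

```python
import math

# Solve 2.3x^2 - 1.4x - 6.8 = 0 with the quadratic formula
a, b, c = 2.3, -1.4, -6.8
disc = b * b - 4 * a * c                 # discriminant: 1.96 + 62.56 = 64.52
x1 = (-b + math.sqrt(disc)) / (2 * a)
x2 = (-b - math.sqrt(disc)) / (2 * a)
print(round(disc, 2), round(x1, 3), round(x2, 3))   # 64.52 2.051 -1.442
```

Substituting either root back into 2.3x² − 1.4x − 6.8 returns zero to within floating-point error, confirming both values.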
Aremco Ceramabind 642: A High-Temperature Inorganic Binder for Advanced Applications 31 Aug 2024 Introducing Aremco Ceramabind 642: The Ultimate High-Temperature Inorganic Binder In the pursuit of developing high-performance coatings and binders, researchers have been working tirelessly to create materials that can withstand extreme temperatures without compromising their integrity. One such breakthrough is the development of Aremco Ceramabind 642, a high-temperature inorganic binder that has taken the industry by storm. Key Properties and Benefits Aremco Ceramabind 642 boasts an impressive array of properties that make it an ideal choice for various applications. Some of its key features include: • Acid Resistance: Good • Dry Film Coverage (ft2/gal): 330, 474 • Gas Permeability: Gas Tight • Color: Off-White • Primer: Not Required • Mix Ratio: 75:25 This binder also exhibits excellent moisture resistance and alkali resistance, making it suitable for applications where exposure to water or alkaline substances is unavoidable. Electrical Properties The electrical properties of Aremco Ceramabind 642 are equally impressive: • Volume Resistivity: >=1.00e+13 ohm-cm • Dielectric Strength: 3.46 kV/mm • Dielectric Constant: 22.7@Frequency 1e+6 Hz • Dielectric Loss Index: 0.0016@Frequency 1e+6 Hz These properties make it an ideal choice for applications where electrical insulation and resistance are crucial. Physical Properties The physical properties of Aremco Ceramabind 642 include: • Viscosity: 500 - 1500 cP • Bulk Density: 1.73 g/cc • Water Absorption: 0.00 % • Density: 5.72 g/cc • Volatiles: 0.30 % • Thickness: 1.00 microns Chemical and Thermal Properties Aremco Ceramabind 642 also exhibits excellent chemical resistance, with a flash point of >=93.3 °C. 
Its thermal properties are equally impressive: • CTE, linear: 7.56 µm/m-°C • Shrinkage: <=0.30%@Temperature 538 °C • Thermal Conductivity: 3.00 W/m-K • Maximum Service Temperature, Air: 1650 °C Mechanical Properties The mechanical properties of Aremco Ceramabind 642 include: • Compressive Strength: 1860 MPa • Flexural Strength: 621 MPa • Impact Test: 27.1 J@Temperature 538 °C • Fracture Toughness: 12.0 MPa-m½ Aremco Ceramabind 642 is a high-performance inorganic binder that offers an unparalleled combination of properties and benefits. Its excellent acid resistance, dry film coverage, gas permeability, and moisture resistance make it an ideal choice for various applications. The electrical, physical, chemical, thermal, and mechanical properties of this binder are equally impressive, making it a versatile material for the industry. Aremco Ceramabind 642 is suitable for a wide range of applications, including: • High-temperature coatings • Ceramic and metal powder binding • Electrical insulation and resistance • Moisture-resistant applications • Alkali-resistant applications This binder is an excellent choice for researchers, engineers, and manufacturers seeking to develop high-performance materials that can withstand extreme temperatures and environmental conditions.
Definition Classes Union → SimplyTypedNode → Node A union of type (CollectionType(c, t), CollectionType(_, t)) => CollectionType(c, t). Linear Supertypes Type Hierarchy Learn more about scaladoc diagrams 1. final def !=(arg0: Any): Boolean Definition Classes AnyRef → Any 2. final def ##(): Int Definition Classes AnyRef → Any 3. def +(other: String): String Implicit information This member is added by an implicit conversion from Union to any2stringadd[Union] performed by method any2stringadd in scala.Predef. Definition Classes 4. def ->[B](y: B): (Union, B) Implicit information This member is added by an implicit conversion from Union to ArrowAssoc[Union] performed by method ArrowAssoc in scala.Predef. Definition Classes 5. final def :@(newType: Type): Self Return this Node with a Type assigned (if no other type has been seen for it yet) or a typed copy. Definition Classes 6. final def ==(arg0: Any): Boolean Definition Classes AnyRef → Any 7. final def asInstanceOf[T0]: T0 8. final def buildCopy: Self Build a copy of this node with the current children. Definition Classes BinaryNode → Node 9. def buildType: Type 10. def childNames: Seq[String] Names for the child nodes to show in AST dumps. Defaults to a numbered sequence starting at 0 but can be overridden by subclasses to produce more suitable names. Definition Classes Union → Node All child nodes of this node. Must be implemented by subclasses. Definition Classes BinaryNode → Node 12. final def childrenForeach[R](f: (Node) ⇒ R): Unit Apply a side-effecting function to all direct children from left to right. Note that n.childrenForeach(f) is equivalent to n.children.foreach(f) but can be implemented more efficiently in Node subclasses. Definition Classes BinaryNode → Node 13. def clone(): AnyRef Definition Classes @throws( ... ) 14. def ensuring(cond: (Union) ⇒ Boolean, msg: ⇒ Any): Union Implicit information This member is added by an implicit conversion from Union to Ensuring[Union] performed by method Ensuring in scala.Predef. Definition Classes 15. def ensuring(cond: (Union) ⇒ Boolean): Union Implicit information This member is added by an implicit conversion from Union to Ensuring[Union] performed by method Ensuring in scala.Predef. Definition Classes 16. def ensuring(cond: Boolean, msg: ⇒ Any): Union Implicit information This member is added by an implicit conversion from Union to Ensuring[Union] performed by method Ensuring in scala.Predef. Definition Classes 17. def ensuring(cond: Boolean): Union Implicit information This member is added by an implicit conversion from Union to Ensuring[Union] performed by method Ensuring in scala.Predef. Definition Classes 18. final def eq(arg0: AnyRef): Boolean 19. def finalize(): Unit Definition Classes @throws( classOf[java.lang.Throwable] ) 20. def formatted(fmtstr: String): String Implicit information This member is added by an implicit conversion from Union to StringFormat[Union] performed by method StringFormat in scala.Predef. Definition Classes 21. final def getClass(): Class[_] Definition Classes AnyRef → Any 22. def getDumpInfo: DumpInfo Return the name, main info, attribute info and named children Definition Classes Union → Node → Dumpable 23. def hasType: Boolean Check if this node has a type without marking the type as seen. Definition Classes 24.
final def infer(scope: Scope = Map.empty, typeChildren: Boolean = false): Self Rebuild this node and all children with their computed type. Rebuild this node and all children with their computed type. If this node already has a type, the children are only type-checked again if typeChildren is true. if retype is also true, the existing type of this node is replaced. If this node does not yet have a type, the types of all children are computed first. Definition Classes 25. final def isInstanceOf[T0]: Boolean 26. val left: Node 27. final def mapChildren(f: (Node) ⇒ Node, keepType: Boolean = false): Self Apply a mapping function to all children of this node and recreate the node with the new children. Apply a mapping function to all children of this node and recreate the node with the new children. If all new children are identical to the old ones, this node is returned. If keepType is true, the type of this node is kept even when the children have changed. Definition Classes BinaryNode → Node 28. final def ne(arg0: AnyRef): Boolean 29. def nodeType: Type The current type of this node. The current type of this node. Definition Classes 30. final def notify(): Unit 31. final def notifyAll(): Unit 32. def peekType: Type Get the current type of this node for debug output without marking it as seen. Get the current type of this node for debug output without marking it as seen. Definition Classes 33. def rebuild(left: Node, right: Node): Union Rebuild this node with a new list of children. Rebuild this node with a new list of children. Implementations of this method must not reuse the current node. This method always returns a fresh copy. Definition Classes BinaryNode → Node 35. val right: Node 36. final def synchronized[T0](arg0: ⇒ T0): T0 37. final def toString(): String Definition Classes Node → AnyRef → Any 38. final def untyped: Self Return this Node with no Type assigned (if it has not yet been observed) or an untyped copy. 
Return this Node with no Type assigned (if it has not yet been observed) or an untyped copy. Definition Classes 39. final def wait(): Unit Definition Classes @throws( ... ) 40. final def wait(arg0: Long, arg1: Int): Unit Definition Classes @throws( ... ) 41. final def wait(arg0: Long): Unit Definition Classes @throws( ... ) Rebuild this node with new child nodes unless all children are identical to the current ones, in which case this node is returned. Rebuild this node with new child nodes unless all children are identical to the current ones, in which case this node is returned. Definition Classes 43. final def withInferredType(scope: Scope, typeChildren: Boolean): Self 44. def →[B](y: B): (Union, B) Implicit information This member is added by an implicit conversion from Union to ArrowAssoc[Union] performed by method ArrowAssoc in scala.Predef. Definition Classes
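The identity-preserving contract documented for mapChildren ("If all new children are identical to the old ones, this node is returned") can be illustrated outside Scala. A hypothetical Python analogue of that contract (an illustrative sketch, not Slick itself):

```python
class Node:
    """Toy AST node mirroring the mapChildren contract described above:
    if the mapping function leaves every child identical, the original
    node object is returned instead of a fresh copy."""
    def __init__(self, children=()):
        self.children = tuple(children)

    def map_children(self, f):
        new_children = tuple(f(c) for c in self.children)
        # Preserve sharing: return self when no child actually changed
        if all(a is b for a, b in zip(new_children, self.children)):
            return self
        return Node(new_children)        # rebuild: always a fresh copy

leaf = Node()
tree = Node((leaf, leaf))
same = tree.map_children(lambda c: c)          # identity: no rebuild
rebuilt = tree.map_children(lambda c: Node())  # changed children: fresh node
print(same is tree, rebuilt is tree)           # True False
```

Returning the same object for unchanged children keeps large ASTs cheap to traverse repeatedly, which is the design rationale the scaladoc hints at.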
This module provides the building blocks required to create finite state machines. newtype MealyT :: (Type -> Type) -> Type -> Type -> Type newtype MealyT f i o Mealy is a finite state machine, where: • f is the effect under which we evaluate, • i is the input type and • o is the output type. hoistMealyT :: forall f g i. Functor g => (f ~> g) -> (MealyT f i) ~> (MealyT g i) Transforms a Mealy machine running in the context of f into one running in g, given a natural transformation from f to g. data Step :: (Type -> Type) -> Type -> Type -> Type data Step f i o Step is the core for running machines. Machines can either stop via the Halt constructor, or emit a value and recursively construct the rest of the machine. type Source :: (Type -> Type) -> Type -> Type type Source f o = MealyT f Unit o Sources are 'initial nodes' in machines. They allow for data to be generated. type Sink :: (Type -> Type) -> Type -> Type type Sink f i = MealyT f i Unit Sinks are 'terminator nodes' in machines. They allow for an effectful computation to be executed on the inputs. source :: forall f o. Functor f => f o -> Source f o Wrap an effectful value into a source. The effect will be repeated indefinitely. For example, generating ten instances of the value 1: take 10 $ source (pure 1) sink :: forall f i. Functor f => (i -> f Unit) -> Sink f i Construct a machine which executes an effectful computation on its inputs. For example, logging could be used as a sink: take 10 $ source (pure 1) >>> sink logShow stepMealy :: forall f i o. i -> MealyT f i o -> f (Step f i o) Execute (unroll) a single step on a machine. runMealy :: forall f. Monad f => MealyT f Unit Unit -> f Unit Run a machine as an effectful computation. For example: runMealy $ take 10 $ source (pure 1) >>> sink logShow pureMealy :: forall f i o. Applicative f => (i -> Step f i o) -> MealyT f i o Wrap a pure function into a machine.
The function can either terminate via Halt, or Emit a value and then decide whether to Halt, continue with a different function, or (usually) wrap itself via pureMealy recursively. For example, we can Halt on zero:

haltOn0 :: forall f. Applicative f => MealyT f Int Int
haltOn0 = pureMealy go
  where
  go 0 = Halt
  go n = Emit n haltOn0

mealy :: forall f i o. (i -> f (Step f i o)) -> MealyT f i o Wrap an effectful function into a machine. See pureMealy for an example using pure functions. take :: forall f i o. Applicative f => Int -> MealyT f i o -> MealyT f i o Limit the number of outputs of a machine. After using up the n allotted outputs, the machine will halt. toUnfoldable :: forall f g i o. Unfoldable g => Comonad f => i -> MealyT f i o -> g o Extract all the outputs of a machine, given some input. zipWith :: forall f i a b c. Apply f => (a -> b -> c) -> MealyT f i a -> MealyT f i b -> MealyT f i c Zip two machines together under some function f. scanl :: forall f i a b. Functor f => (b -> a -> b) -> b -> MealyT f i a -> MealyT f i b Accumulate the outputs of a machine into a new machine. fromMaybe :: forall f i o. Applicative f => Maybe o -> MealyT f i o Creates a machine which either emits a single value before halting (for Just), or just halts (in the case of Nothing). fromArray :: forall f i o. Monad f => Array o -> MealyT f i o Creates a machine which emits all the values of the array before halting. msplit :: forall f i o. Applicative f => MealyT f i o -> MealyT f i (Maybe (Tuple o (MealyT f i o))) Unwrap a machine such that its output is either Nothing in case it would halt, or Just the output value and the next computation. interleave :: forall f i o. Monad f => MealyT f i o -> MealyT f i o -> MealyT f i o Interleaves the values of two machines with matching inputs and outputs. when :: forall f i a b.
Monad f => MealyT f i a -> (a -> MealyT f i b) -> MealyT f i b Given a machine and a continuation, it will pass outputs from the machine to the continuation as long as possible until one of them halts. ifte :: forall f i a b. Monad f => MealyT f i a -> (a -> MealyT f i b) -> MealyT f i b -> MealyT f i b If then else: given a machine producing a, a continuation f, and a machine producing b, generate a machine which will grab outputs from the first machine and pass them over to the continuation as long as neither halts. Once the process halts, the second (b) machine is returned. wrapEffect :: forall f i o. Applicative f => f o -> MealyT f i o Creates a machine which wraps an effectful computation and ignores its input.
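The source/take/scanl combinators above have a straightforward generator analogue. A Python sketch of the same ideas (illustrative only, not the PureScript library itself):

```python
# Generator-based analogue of the Mealy combinators documented above.
def source(value):
    # like `source (pure value)`: emit the value indefinitely
    while True:
        yield value

def take(n, machine):
    # limit a machine to n outputs, then halt
    for _, out in zip(range(n), machine):
        yield out

def scanl(f, acc, machine):
    # fold each output into an accumulator, emitting every running result
    for out in machine:
        acc = f(acc, out)
        yield acc

outputs = list(take(10, source(1)))
running = list(scanl(lambda b, a: b + a, 0, take(10, source(1))))
print(outputs)          # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
print(running)          # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

As with Halt in the PureScript API, exhausting a Python generator terminates every machine downstream of it.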
Introduction to Cryptography
Lectures: Mondays 11:00am-12:50pm, WWH 517
Instructor: Oded Regev
Office hours: Mondays 9:45am-10:45am, WWH 303
Reading:
- Introduction to Cryptography, by Jonathan Katz and Yehuda Lindell. A good introductory book.
- Foundations of Cryptography, Vol. 1 and 2, by Oded Goldreich. A comprehensive book for those who want to understand the material in greater depth.
- Lecture notes by Yevgeniy Dodis, which we'll follow closely.
- Lecture notes by Chris Peikert.
- Lecture notes by Rafael Pass and Abhi Shelat.
- Last year's course.
Requirements: Active participation in class, homework assignments, final exam.
Prerequisites: Students are expected to be comfortable reading and writing mathematical proofs, be at ease with algorithmic concepts, and have elementary knowledge of discrete math, number theory, and basic probability. No programming will be required for the course.
Schedule:
Sep 14: Introduction, perfect secrecy. Number theory. Lectures 1+2 of Peikert, Lecture 1 of Dodis, Section 1.3 of Pass-Shelat.
Sep 21: (Proof of Shannon's theorem.) Finishing number theory. One-way functions (and collections thereof). Weak one-way functions.
Sep 28: Examples of one-way functions. A bit on going from weak to strong OWFs. Weak OWFs to strong OWFs. Informal discussion of indistinguishability and pseudorandom generators.
Oct 5: Collections of one-way functions. More examples of OWFs. Application of OWFs to password storage.
Oct 13: Indistinguishability. Pseudorandom generators. Expanding PRGs.
Oct 19: Blum-Micali PRG. Hard-core bits. Goldreich-Levin; pseudorandom functions: motivation and definition.
Oct 26: Constructing pseudorandom functions.
Nov 2: Pseudorandom permutations and Luby-Rackoff; symmetric-key encryption, definitions of security and constructions.
Nov 9: Finishing symmetric-key encryption; public-key encryption.
Nov 11: Trapdoor one-way permutations; Diffie-Hellman protocol and ElGamal cryptosystem. Authentication (model only).
Nov 23: Semantic security of PKE. Authentication security definition and information-theoretic construction.
Nov 30: Computational construction of MACs using PRFs. Expanding the input of MACs using CRHFs or almost-universal hash functions.
Dec 7: Authenticated encryption. Digital signatures. Zero knowledge.
Dec 9: Lattice-based cryptography (bonus class).
Dec 14: Final exam.
MIPS Multiplication: Using MUL, MULT and SLL - Techwarior
MIPS multiplication is a bit trickier than addition and subtraction, but here we will simplify it for you. MIPS multiplication uses the arithmetic/logical instruction format and can be performed with two opcodes, mul and mult. The two opcodes differ slightly in operation and syntax, as we discuss in detail below.
MIPS Multiplication Using MUL
The following example program describes the functionality of the mul opcode. The mul instruction takes three register operands. mul can be used in three possible ways:
• MULTIPLY REGISTERS (mul $t1,$t2,$t3)
The statement above multiplies the values in $t2 and $t3. Note that the Hi register holds the high-order 32 bits, and Lo and $t1 the low-order 32 bits, of the product of $t2 and $t3 (we can use mfhi to access Hi and mflo to access Lo).
• MULTIPLY BY A 16-BIT SIGNED IMMEDIATE (mul $t1,$t2,-200)
Hi holds the high-order 32 bits, and Lo and $t1 the low-order 32 bits, of the product of $t2 and a 16-bit signed immediate.
• MULTIPLY BY A 32-BIT SIGNED IMMEDIATE (mul $t1,$t2,100021)
Hi holds the high-order 32 bits, and Lo and $t1 the low-order 32 bits, of the product of $t2 and a 32-bit signed immediate.
Example Multiplication Program Using MUL
.data
m: .asciiz "The result of multiplication is: "
.text
main:
    addi $s0, $zero, 10        # first operand
    addi $s1, $zero, 4         # second operand
    mul  $t0, $s0, $s1         # $t0 = 10 * 4
    li   $v0, 4                # print_string syscall
    la   $a0, m
    syscall
    li   $v0, 1                # print_int syscall
    add  $a0, $zero, $t0
    syscall
    li   $v0, 10               # exit syscall
    syscall
Output:
The result of multiplication is: 40
In the above program, we initialized registers $s0 and $s1 with the values 10 and 4, performed the multiplication with the mul opcode, and finally printed the result.
MIPS Multiplication Using MULT
The following example program describes the functionality of the mult opcode. The mult instruction takes two register operands.
Example Multiplication Program Using MULT
.data
n1: .asciiz "Enter the first value: "
n2: .asciiz "Enter the second value: "
m:  .asciiz "The result of multiplication is: "
.text
main:
    li   $v0, 4            # print first prompt
    la   $a0, n1
    syscall
    li   $v0, 5            # read_int syscall
    syscall
    move $t0, $v0
    li   $v0, 4            # print second prompt
    la   $a0, n2
    syscall
    li   $v0, 5
    syscall
    move $t1, $v0
    mult $t0, $t1          # 64-bit product goes into Hi/Lo
    mflo $s0               # low-order 32 bits of the product
    li   $v0, 4
    la   $a0, m
    syscall
    li   $v0, 1            # print the result
    add  $a0, $zero, $s0
    syscall
    li   $v0, 10           # exit
    syscall
Sample run:
Enter the first value: 4
Enter the second value: 7
The result of multiplication is: 28
In the above program, we read two values from the user and multiply them with the mult instruction. When two 32-bit numbers are multiplied, the result can be up to 64 bits wide, so we need the low-order 32 bits, which are held in Lo. To move that result into a register, we use the mflo instruction.
MIPS Multiplication Using SLL
sll (shift left logical) is the most efficient way to multiply a number by a power of 2.
Example Multiplication Program Using SLL
.text
main:
    addi $s0, $zero, 2     # value to multiply
    sll  $t0, $s0, 1       # shift left by 1, i.e. multiply by 2
    li   $v0, 1            # print the result (4)
    move $a0, $t0
    syscall
    li   $v0, 10           # exit
    syscall
We use the shift left logical instruction for multiplication. We stored the value 2 in register $s0; in the sll instruction we gave a shift amount of 1, so the value is shifted left once, multiplying it by 2 and leaving 4 in $t0.
State your queries in the comment box!
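To make the Hi/Lo behavior of mult and the shift-as-multiply trick of sll concrete, here is a small Python sketch of the register semantics (Python is used only for illustration; the function names mips_mult and mips_sll are ours, not part of any MIPS toolchain):

```python
def mips_mult(a, b):
    """Mimic MIPS mult: return the (hi, lo) 32-bit halves of the 64-bit product."""
    product = (a * b) & 0xFFFFFFFFFFFFFFFF      # two's-complement 64-bit result
    hi = (product >> 32) & 0xFFFFFFFF           # what mfhi would read
    lo = product & 0xFFFFFFFF                   # what mflo would read
    return hi, lo

def mips_sll(value, shamt):
    """Mimic sll: shift left logical, i.e. multiply by 2**shamt (mod 2**32)."""
    return (value << shamt) & 0xFFFFFFFF

print(mips_mult(4, 7))    # (0, 28): small products fit entirely in Lo
print(mips_sll(2, 1))     # 4: shifting left once doubles the value
```

This also shows why the mult program above only needs mflo: for small operands the high half of the product is zero, so the entire answer lives in Lo.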
Shahar Shamai and Efi Fogel
Movable Separability of Sets [2] is a class of problems that deal with moving sets of objects, such as polygons in the plane; the challenge is to avoid collisions between the objects while considering different kinds of motions and various definitions of separation. The Moving Sofa problem, or sofa problem, is a classic member of this class. It is a two-dimensional idealization of real-life furniture-moving problems; it asks for the rigid two-dimensional shape of largest area \(A\) that can be maneuvered through an L-shaped planar region with legs of unit width [3]. The area \(A\) thus obtained is referred to as the sofa constant. The exact value of the sofa constant is an open problem; see Figure 25.1. These problems become progressively more challenging as the allowable set of separation motions becomes more complex (has more degrees of freedom), the number of objects involved grows, or the shape of the objects becomes more complicated. At this point this package provides solutions to one subclass of problems related to 2D castings. In particular, each of these solutions handles a single moving polygon and a single stationary polygon, and considers a single translation of the moving polygon. Casting is a manufacturing process where liquid material is poured into a cavity inside a mold, which has the shape of a desired product. (The mold can take any shape and form as long as it has a cavity of the desired shape.) After the material solidifies, the product is pulled out of the mold. Typically a mold is used to manufacture numerous copies of a product. The challenge is designing a proper mold, such that the solidified product can be separated from its mold without breaking it.
This package provides a function called CGAL::Set_movable_separability_2::Single_mold_translational_casting::top_edges() that, given a simple closed polygon \(P\), determines whether a cavity (of a mold in the plane) that has the shape of \(P\) can be used so that the polygon \(P\) could be pulled out of the mold without colliding into the mold (but possibly sliding along the mold boundary); see Figure 25.2 for an illustration. In reality, the mold of a castable polygon must be rotated before the polygon is cast, such that one edge becomes parallel to the \(x\)-axis and is located above all other edges; such an edge is referred to as a top edge. A polygon may have up to four edges that can serve as top edges. If the polygon is castable, the function computes the set of top edges of such cavities and the corresponding closed ranges of pullout directions in the plane. The input polygon must satisfy two conditions, as follows. First, it has to be simple. Essentially, a simple polygon is topologically equivalent to a disk; see Chapter 2D Regularized Boolean Set-Operations for the precise definition of simple polygons. Secondly, no three consecutive vertices may be collinear. If you suspect that the input polygon may not satisfy the latter condition, pre-process the polygon to eliminate this degeneracy. The implementation is based on an algorithm developed by Shamai and Halperin; see [1] for the generalization of the algorithm to 3D. The time and space complexities are in \(O(n)\) and \(O(1)\), respectively. In order to ensure robustness and correctness you must use a kernel that guarantees exact constructions as well as exact predicates, e.g., Exact_predicates_exact_constructions_kernel. The following example computes the top edges and their pullout directions of an input polygon read from a file and reports the results.
File Set_movable_separability_2/top_edges_single_mold_trans_cast.cpp (excerpt):

Polygon_2 polygon;
std::ifstream input_file(filename);
input_file >> polygon;
std::list<Top_edge> top_edges;
casting::top_edges(polygon, std::back_inserter(top_edges));

This package provides two additional functions, namely CGAL::Set_movable_separability_2::Single_mold_translational_casting::pullout_directions() and CGAL::Set_movable_separability_2::Single_mold_translational_casting::is_pullout_direction(). The former accepts a simple closed polygon \(P\) and an edge \(e\) of the polygon \(P\); it determines whether \(e\) is a top edge of \(P\), and if so, it computes the range of pullout directions of \(e\). The latter is overloaded with two versions: the first version accepts a simple closed polygon \(P\) and a direction \(d\); it determines whether \(d\) is a pullout direction of some top edge of \(P\). The other version accepts, in addition, an edge \(e\) of the polygon \(P\); it determines whether \(d\) is a pullout direction of \(e\). Overloads of each of the functions above that accept (i) an additional argument that indicates the orientation of the input polygon, (ii) an additional traits argument, or (iii) both, are also provided by the package.
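For intuition about what a pullout-direction test checks, the standard casting criterion (a polygon can be translated out of its mold in direction \(d\) only if no non-top edge blocks the motion, i.e., every remaining edge's outward normal makes an angle of at least 90 degrees with \(d\)) can be sketched outside CGAL in a few lines of Python. This is an independent illustration of the geometric condition, not the CGAL implementation, and the function name is ours:

```python
def is_pullout_direction(vertices, top_edge, d):
    """Blocking test for pulling a counterclockwise polygon out of its mold in
    direction d: every edge except the top edge must have an outward normal n
    with n . d <= 0 (an angle of at least 90 degrees with d)."""
    n = len(vertices)
    for i in range(n):
        if i == top_edge:
            continue
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        # Outward normal of edge (x1,y1)->(x2,y2) for a CCW-oriented polygon.
        nx, ny = y2 - y1, x1 - x2
        if nx * d[0] + ny * d[1] > 0:
            return False   # this edge blocks any translation along d
    return True

# Unit square, CCW, with edge 2 (from (1,1) to (0,1)) as the top edge:
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(is_pullout_direction(square, 2, (0.0, 1.0)))   # True: straight up works
print(is_pullout_direction(square, 2, (1.0, 0.0)))   # False: blocked sideways
```

The CGAL functions additionally compute the full closed range of admissible directions per top edge, and do so with exact arithmetic rather than floating point.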
TSstudio 0.1.3
I used the Thanksgiving break to push a new update of the TSstudio package to CRAN (version 0.1.3). The new version includes an update of the ts_backtesting function along with two new functions - ts_to_prophet for converting time series objects to the prophet input format (i.e., ds and y columns), and ccf_plot for lag plots between two time series. The package can be installed from either CRAN or Github:
# CRAN
# Github
# install.packages("devtools")
## [1] '0.1.5'
Converting a time series object to the prophet format
The ts_to_prophet function converts ts, xts and zoo objects into the prophet input format (i.e., a data frame with two columns - ds for the date and y for the series values). For instance, converting the USgas series to a prophet object:
## The USgas series is a ts object with 1 variable and 235 observations
## Frequency: 12
## Start time: 2000 1
## End time: 2019 7
USgas_prophet <- ts_to_prophet(USgas)
## [1] 2510.5 2330.7 2050.6 1783.3 1632.9 1513.1
## ds y
## 1 2000-01-01 2510.5
## 2 2000-02-01 2330.7
## 3 2000-03-01 2050.6
## 4 2000-04-01 1783.3
## 5 2000-05-01 1632.9
## 6 2000-06-01 1513.1
In the case of a ts object, where the index is not a date object, the function extracts the time component from the first observation and uses it along with the frequency of the series to estimate the date column of the prophet data frame. For instance, in the case of a monthly series, where the time object provides only the year and the month, the day component of the date object will be set to 1 by default. Alternatively, if known, you can set the date of the first observation with the start argument.
For example, if the USgas series is captured mid-month (say, on every 15th of the month):
USgas_prophet <- ts_to_prophet(USgas, start = as.Date("2000-01-15"))
## ds y
## 1 2000-01-15 2510.5
## 2 2000-02-15 2330.7
## 3 2000-03-15 2050.6
## 4 2000-04-15 1783.3
## 5 2000-05-15 1632.9
## 6 2000-06-15 1513.1
Similarly, the function can handle xts and zoo objects:
## The EURO_Brent series is a zoo object with 1 variable and 389 observations
## Frequency: monthly
## Start time: May 1987
## End time: Sep 2019
## May 1987 Jun 1987 Jul 1987 Aug 1987 Sep 1987 Oct 1987
## 18.58 18.86 19.86 18.98 18.31 18.76
ts_to_prophet(EURO_Brent) %>% head()
## ds y
## 1 1987-05-01 18.58
## 2 1987-06-01 18.86
## 3 1987-07-01 19.86
## 4 1987-08-01 18.98
## 5 1987-09-01 18.31
## 6 1987-10-01 18.76
Lag plots of two series
The second function, ccf_plot, provides an interactive and intuitive visualization of the cross-correlation between two time series, plotting one series against another series (and its lags) and calculating the correlation between the two with the ccf function. For instance, let's use the function to plot the relationship between the unemployment rate and total vehicle sales in the US:
## The USUnRate series is a ts object with 1 variable and 861 observations
## Frequency: 12
## Start time: 1948 1
## End time: 2019 9
## The USVSales series is a ts object with 1 variable and 525 observations
## Frequency: 12
## Start time: 1976 1
## End time: 2019 9
ccf_plot(x = USVSales, y = USUnRate)
The function automatically aligns the two series and uses only their overlapping observations before calculating the cross-correlation values between the first series and the lags of the second series (where the 0 lag represents the series itself, and negative lags represent the leading lags). The title of each plot specifies the lag number and the cross-correlation value.
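For intuition, the per-lag cross-correlation values that ccf computes can be approximated in a few lines of Python (a rough sketch of the statistic over the aligned overlap, not the R implementation; the function name is ours):

```python
import numpy as np

def cross_correlation(x, y, lag):
    """Correlation between x[t] and y[t + lag] over the overlapping window;
    negative lags pair x with earlier values of y (the leading lags)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if lag > 0:
        a, b = x[:-lag], y[lag:]
    elif lag < 0:
        a, b = x[-lag:], y[:lag]
    else:
        a, b = x, y
    return float(np.corrcoef(a, b)[0, 1])

t = np.arange(100)
x = np.sin(t / 5.0)
y = np.roll(x, 3)                             # y lags x by 3 steps
print(round(cross_correlation(x, y, 3), 3))   # 1.0 at the matching lag
```

Scanning this statistic over a range of lags is exactly what the grid of panels in ccf_plot summarizes.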
The lags argument of the function defines the number of lags in the plot, where the use of negative lags defines the leading indicators. For example, setting the lags argument to -6:6 will plot the first 6 lags, the series itself and the first 6 leading lags of the series:
ccf_plot(x = USVSales, y = USUnRate, lags = -6:6)
Forecasting with backtesting and xreg
The ts_backtesting function, which trains and tests multiple models (e.g., auto.arima, HoltWinters, nnetar, etc.) with a backtesting approach, now supports the xreg component of auto.arima and nnetar (forecast package) and their embedding in the hybridModel model (forecastHybrid package). The use of the xreg component is straightforward and requires two components:
• The predictors - the regressors, in a vector or matrix format, to be used as the input to the model's xreg argument. The length of this input must be aligned with the length of the input series.
• The future values of the predictors - a vector or matrix corresponding to the inputs used as predictors, where the length of this component must be aligned with the forecast horizon (or the h argument of the function). This component is set with the xreg.h argument.
For instance, let's forecast the monthly consumption of natural gas in the US over the next 5 years (or 60 months) by regressing the USgas series on its Fourier terms, using the auto.arima, nnetar and hybridModel models. We will use the fourier function from the forecast package to generate both the inputs for the regression model (x_reg) and the future values for the forecast itself (x_reg.forecast):
# Setting the forecast horizon
h <- 60
# Creating the xreg component for the regression
x_reg <- fourier(USgas, K = 5)
# Creating the xreg component for the forecast
x_reg.forecast <- forecast::fourier(USgas, K = 5, h = h)
Note that the ts_backtesting function automatically splits and aligns the xreg component according to the expanding-window movement of the function.
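The Fourier terms used above as external regressors are just sine/cosine pairs at the seasonal harmonics. A minimal Python sketch of the kind of regressor matrix that forecast::fourier(USgas, K = 5) produces (illustrative only; not the R source, and the function name is ours):

```python
import numpy as np

def fourier_terms(n, period, K):
    """Return an n-by-2K matrix of seasonal Fourier regressors:
    a sin and cos pair for each harmonic k = 1..K of the given period."""
    t = np.arange(1, n + 1)
    cols = []
    for k in range(1, K + 1):
        cols.append(np.sin(2 * np.pi * k * t / period))
        cols.append(np.cos(2 * np.pi * k * t / period))
    return np.column_stack(cols)

# USgas is monthly with 235 observations, so period = 12 and K = 5 harmonics:
X = fourier_terms(n=235, period=12, K=5)
print(X.shape)  # (235, 10)
```

Because the columns are deterministic functions of time, extending them h steps ahead (the x_reg.forecast component) only requires evaluating the same harmonics at future t values.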
We will set the function to run backtesting with 6 periods/splits to train the auto.arima, nnetar and hybridModel models, in order to examine the performance of the models over time:
md <- ts_backtesting(ts.obj = USgas, error = "MAPE", models = "anh", periods = 6, h = h, xreg.h = x_reg.forecast, a.arg = list(xreg = x_reg), h.arg = list(models = "aetsfz", a.args = list(xreg = x_reg), verbose = FALSE), n.arg = list(xreg = x_reg), plot = FALSE)
## Model_Name avgMAPE sdMAPE avgRMSE sdRMSE
## 1 hybrid 3.2883333 0.95547719 100.35500 33.593355
## 2 auto.arima 4.2500000 1.81685442 123.41333 34.677851
## 3 nnetar 4.5200000 1.65343279 133.14333 50.975960
We can now review the performance of each model using the summary plot:
The summary plot shows the error distribution of each model and the forecast of the model that performed best in the backtesting. The output contains the models' performance on the backtesting (i.e., the summary plot and the leaderboard). In this case, since we set the error argument to MAPE, the function selected the auto.arima forecast as the final forecast. Yet, you can see in the plot that the error rate of the hybrid model is more stable compared to auto.arima, so it might be a better choice (the hybrid model contains auto.arima along with other models, which potentially helps to hedge the error). All of the models' information is available in the Forecast_Final folder. For example, you can pull the auto.arima model and check its residuals:
The plan for future releases is to expand the functionality of the ts_backtesting function by adding additional models (e.g., tslm, prophet, etc.) and to expand the window settings of the backtesting (adding a sliding-window option).
sarima 0.9.3
• removed ‘FitARMA’ from ‘Suggests:’; it had not been needed for some time.
• tsdiag.Sarima was sometimes presenting the menu of choices when that was not needed or asked for (e.g., when plot = 1:4 and ‘layout’ a two-by-two matrix), a bug introduced in v0.9.2.
sarima 0.9.2
• the ‘Sarima’ method for tsdiag now splits the window into less than 3 subwindows when the number of choices is less than 3. As before, if argument layout is supplied, then it is used unconditionally to set the layout of the plots.
• no longer require C++11, thus relying on R to do the right thing. Also, avoids the following NOTE from recent R-devel: ‘Specified C++11: please drop specification unless essential’.
sarima 0.9.1
• included instructions on how to install package ‘FitARMA’, if it is needed.
• fixed NOTEs from CRAN about escaped LaTeX specials.
• added the expanded stationary AR polynomial to the output of the "SarimaModel" method for filterPoly and filterPolyCoef. The fully expanded AR polynomial, which includes also the integrated terms, is available as before.
• in prepareSimSarima (and hence sim_sarima) fixed a bug causing wrong results for some combinations of parameters when initial values were supplied. Also removed some parts in the documentation of these functions which no longer applied.
sarima 0.9
• new generic function FisherInformation giving the information matrix for fitted and theoretical models, with methods for ARMA and seasonal ARMA models.
• new generic function spectrum with methods for (seasonal) ARMA models and default stats::spectrum.
sarima 0.8.6
• in tsdiag.Sarima(), if argument plot specifies only one or two plots, then the window is now split into 1 or 2 sub-windows, respectively, even if argument layout is not used.
• new convenience function, se(), to compute standard errors.
• confint methods extended and documented.
• extensive changes in the documentation, including reorganisation of the pkgdown site.
• moved fkf and KFAS to Suggests and removed dplyr from the dependencies.
sarima 0.8.5
• new tsdiag method for class Sarima (the result of sarima()). The method can also be called directly on the output from base R’s arima() with tsdiag.Sarima() or sarima::tsdiag.Sarima(). The method offers several portmanteau tests (including Ljung-Box, Li-McLeod and Box-Pierce), plots of autocorrelations and partial autocorrelations of the residuals, the ability to control which graphs are produced (including interactively), and their layout. The computed results are returned (invisibly). The default layout of the graphs is similar to stats::tsdiag() (but with adjusted d.f.). The method always makes a correction of the degrees of freedom of the portmanteau tests.
• github repository housekeeping - switched from TravisCI to Github actions.
• now the pkgdown website is automatically rebuilt on push (via a github action).
• moved FitAR from Depends to Imports (after some changes in .onLoad() to make this possible).
sarima 0.8.4
• updated a reference to avoid a redirect.
sarima 0.8.2
• import again FKF (support for it was removed when FKF was temporarily archived on CRAN).
• removed developers’ comments that had been accidentally left in a vignette.
• removed an erroneous rev() from the garch tests vignette.
• added new tests and fixed several bugs in the process.
• the show method for class “ArmaModel” now returns NULL. The previous return value was spooking “pkgdown::build_site()”, resulting in the error: Error in UseMethod("replay_html", x) : no applicable method for 'replay_html' applied to an object of class "c('double', 'numeric')"
sarima 0.8.1
• relaxed numerical comparisons in some tests, to account for additional platforms, such as Open-BLAS, recently activated for checks on CRAN.
sarima 0.8.0
• new test for GARCH-type noise based on a Kokoszka and Politis result.
• more complete sets of methods for several functions. In particular, there was infinite recursion in some cases.
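As a reminder of what the portmanteau tests offered by tsdiag.Sarima compute, the Ljung-Box statistic on a residual series can be sketched in Python (illustrative only; the package implements these tests in R, with the degrees-of-freedom correction mentioned above, and the function name here is ours):

```python
import numpy as np

def ljung_box(residuals, h):
    """Ljung-Box statistic Q = n(n+2) * sum_{k=1}^{h} r_k^2 / (n - k),
    where r_k is the lag-k sample autocorrelation of the residuals.
    Q is compared against a chi-squared distribution with h - m d.f.,
    m being the number of fitted ARMA parameters."""
    x = np.asarray(residuals, dtype=float)
    x = x - x.mean()
    n = len(x)
    denom = np.sum(x ** 2)
    q = 0.0
    for k in range(1, h + 1):
        r_k = np.sum(x[:-k] * x[k:]) / denom
        q += r_k ** 2 / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(0)
q_white = ljung_box(rng.standard_normal(500), h=10)          # small for white noise
q_walk = ljung_box(np.cumsum(rng.standard_normal(500)), h=10)  # huge for a random walk
print(round(q_white, 2), round(q_walk, 2))
```

Large Q relative to the chi-squared reference indicates remaining autocorrelation in the residuals, which is exactly what the tsdiag plots visualize.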
• bug fixes
• improved show() methods for autocovariance objects
• numerous changes in sarima()
• cater for changing function names in the forthcoming release 2.0.0 of package PolynomF.
• now require lagged (>= 0.2.1) (lagged 0.2.0 is not sufficient since nSeasons() and nSeasons<-() accidentally were not exported by it).
• Vignette garch_tests_example now imports the data using system.file(), so that the examples can be run easily by the user.
• new function makeArimaGnb() for setting up the state space form of ARIMA models. It is a modification of stats::makeARIMA() with Georgi’s method for computation of the stationary part of the initial state covariance matrix. The methods implemented in stats::makeARIMA() are commented out since they are not exported from package stats.
• sarima() gets an argument to specify the method to use for the stationary part of P0 (see above). The available options are the ones in makeARIMA() (“Rossignol2011” and “Gardner1980”) plus Georgi’s method (“gnb”). The default is “Rossignol2011”.
sarima 0.7.6
• updated Makevars and Makevars.win to deal with a NOTE from recent tightening of checks on CRAN (see https://stat.ethz.ch/pipermail/r-package-devel/2018q3/003030.html).
sarima 0.7.5 (not on CRAN)
• NEWS becomes NEWS.md and uses markdown syntax (the style is loosely based on http://style.tidyverse.org/news.html).
• manually incorporated or noted changes from Jamie’s 0.7.4.9001/2018-08-17. Namely:
□ Import package numDeriv (for hessian()).
sarima 0.7.4 (not on CRAN)
• dealt with ‘valgrind’ warnings (had missed one uninitialised warning).
• fixed a bug in prepareSimSarima() - when initial values were not supplied in the stationary case, the initialisation was not correct (thanks to Cameron Doyle for reporting this).
sarima 0.7.3
• dealt with ‘valgrind’ warnings.
sarima 0.7.2
• this is an emergency release to avoid the package being archived on CRAN due to the archival of a dependency.
• the main new feature since the previous release, 0.5-2, of the package is the versatile function sarima(), which provides formula syntax for fitting encompassing SARIMA, ARUMA, XSARIMA, Reg-SARIMA, ARMAX models. Parsimonious multiplicative specifications are supported for the stationary and non-stationary parts of the model, as well as arbitrary unit roots on the unit circle, which can be fixed or estimated. ‘sarima()’ is documented but is still under development.
• removed ‘portes’ from Imports - it was not used for some time in ‘sarima’ (it was scheduled for removal from CRAN on 2018-07-30).
• removed package ‘FKF’ from Imports, since it has been archived on CRAN.
sarima 0.7-0 - 0.7-1 (not on CRAN)
Changes in branch ‘models’:
• in DESCRIPTION, moved ‘methods’ from Depends to Imports.
• various bug fixes and consolidations.
• improvements to the documentation.
• returned the stuff from the test package ‘testts’ (and removed the latter). testts was not helpful and complicated the workflow. Now the tests for armaQ0 etc. are in ‘sarima’.
sarima 0.6-6 (not on CRAN)
• now can request estimation of components with roots on the unit circle.
• in xreg and regx specifications, renamed cs(), B(), p() to .cs(), .B(), .p(), respectively.
• further to the above, in xreg and regx specifications ‘t’ stays as is for now, since it needs more care, but its use is discouraged.
• removed sincos() and L() from sarima specifications; use the equivalents .cs() and .B(), respectively.
sarima 0.6-4 - 0.6-5 (not on CRAN)
• intermediate versions, not useful for back reference (the zip file given is a better place to look for code before 0.6-6).
• now on bitbucket as part of sarima_project. The original upload is in sarima_project/Archive/sarima_project_Orig.zip.
• wrapping up 0.6-5 before making the changes needed for estimation of unit roots.
sarima 0.6-3 (not on CRAN)
• support for tanh transformation.
• factorisation of MA.
• packing up this version before moving stuff that needs ‘:::’ calls elsewhere (e.g. to myRcpp, but haven’t decided on the structure).
sarima 0.6-1 - 0.6-2 (not on CRAN)
sarima 0.6-0 (not on CRAN)
• included some C++ code (using Rcpp/RcppArmadillo) previously tested in my (private) package myRcpp.
• removed the internal arima() functions introduced in 0.5-11.
• added ss.method = “sarima” to sarima(), which uses the new C++ functions to compute the likelihood. Limited testing confirms that this method gives the same results as arima() for models that can be fitted with arima().
• bumping the version number to have a working version in case further improvements mess things up.
sarima 0.5-11 (not on CRAN)
• temporarily created a number of functions to call functions used internally by arima(), see arima.R.
sarima 0.5-9 (not on CRAN)
• temporarily moved FitAR from Imports to Depends, since FitARMA can’t find some functions from FitAR if FitARMA is not attached (move back to Imports when Ian imports FitAR in FitARMA).
• further work on sarima(), saving before more meddling with the environments of the formulas.
sarima 0.5-8 (not on CRAN)
• added support for KFAS.
• fixed parameters and initial values are supported for ARMA specifications (but not for regression parameters yet).
• sarima() is still incomplete but is usable.
• archiving before a full-scale consolidation and clean-up, in case that messes things up.
sarima 0.5-7 (not on CRAN)
• sarima() now fits XARIMAX models, in the case of the second X, using FKF::fkf().
• archiving before starting work on completing the handling of fixed parameters.
sarima 0.5-6 (not on CRAN)
• some consolidation of sarima(); it now supports lagged variables and calls only sarimat(). sarima0() has been removed. The data argument of sarima() is processed properly (incomplete maybe).
sarima 0.5-5 (not on CRAN)
• sarima() now uses the facilities of package Formula to process the model formulas.
sarima 0.5-4 (not on CRAN)
• sarima() can now fit time regression. It currently calls sarimat() if there is a treg argument and sarima0() otherwise.
sarima 0.5-3 (not on CRAN)
• model formulas for SARIMA models using package Formula.
• usable version of the sarima() function, but not for publication yet.
• packing this version before further work on sarima().
sarima 0.5-2
• the plot of acf tests now uses different ‘lty’ so that the confidence limits under iid and garch nulls are visually distinguishable in black-and-white printouts.
• the plot of acf tests now accepts argument ‘interval’ to produce rejection limits for levels other than the default 95%.
• started to add references to the documentation.
• for armaacf() and armaccf_xe(), the innovation variance in argument ‘model’ is now called ‘sigma2’ (the old ‘sigmasq’ still works but is deprecated).
• a number of corrections and additions to the documentation.
• additional examples.
sarima 0.5-1 (not on CRAN)
• SarimaModel now inherits from VirtualSarimaModel (it was inheriting from VirtualFilterModel). On its own, this is invisible to the user. It didn’t invalidate existing objects either.
• new class “VirtualIntegratedModel”.
• new functions nUnitRoots() and isStationaryModel().
• further streamlining.
sarima 0.5-0 (not on CRAN)
• exported functions related to Bartlett’s formula (they were there in version 0.4-5, under different names).
• substantial work on SARIMA models and their documentation.
• increasing the version number before some streamlining of class SarimaModel.
sarima 0.4-5
• moved “Lagged” to a separate package, “lagged”.
• streamlined acfIidTest() and documented it properly.
• new vignette based on an example in Chapter 7 of James Proberts’ MMath project.
sarima 0.4-3
sarima 0.3-6 (not on CRAN)
• white noise tests based on acf and pacf and corresponding plots.
• vignette.
sarima 0.3-5 (not on CRAN)
• revamped “Lagged”: introduced Lagged2d, etc.; mixed Ops, e.g.
“Lagged” + “vector”, now work only if “vector” is of length one or a multiple of the length of e1@data.
sarima 0.3-4 (not on CRAN)
• removed some old commented-out code from sarima.org to reduce clutter.
• extensive changes and consolidation.
sarima 0.3-3 (not on CRAN)
• streamlined SARIMA models and the functions based on old code. Keeping the old code (commented out) for reference.
sarima 0.3-2 (not on CRAN)
• defined the classes for autocorrelations and similar.
• autocorrelations() and similar now have a number of methods.
• passes ‘R CMD check’. Most classes have only fake documentation in VirtualMonicFilter-class.Rd.
sarima 0.3-0 (not on CRAN)
• switched to package PolynomF (from polynom).
• new classes for models, including ARMA and SARIMA.
• R CMD check passes (only a WARNING for undocumented objects and S4 methods).
sarima 0.2-x (not on CRAN)
• added new classes, substantial extension.
• renamed sarima.sim() to sim_sarima().
sarima 0.1-0 (not on CRAN)
• updated and cleaned a bit the old code.
sarima 0.0-5 (not on CRAN)
• removed argument “eps” from fun.forecast since it is ignored.
sarima 0.0-3 (not on CRAN)
• sarima.mod now sets class “sarima” for its result.
• print method for “sarima” class.
sarima 0.0-2 (not on CRAN)
• inserted examples from lectures and handouts from past years.
sarima 0.0-1 (not on CRAN)
• created documentation using the comments in the source code.
sarima 0.0-0 (not on CRAN)
• turned atssarima.r (written in 2006-2007 for course “Applied time series”) into a package.
Stochastic Seismic Inversion Applied to Reservoir Characterization Seismic inversion has been used for several decades in the petroleum industry, both for exploration and production purposes. During this time, seismic inversion methods have progressed from the initial recursive inversion method to the present plethora of methods and software packages available to transform band-limited seismic traces to impedance traces. The application of seismic impedance data has also progressed from qualitative assessments of prospects to the quantitative description of reservoir properties necessary for reservoir characterization. Reservoir characterization requires the construction of detailed 3D petrophysical property models contained within a geological framework. Structural interpretation of seismic data has been and continues to be important in the generation of the framework of the reservoir model. Seismic data has been less frequently involved in the generation of the petrophysical parameters that populate the 3D model. There are several reasons for this lack of application of seismic data to property modeling - lack of a 3D dataset (only 2D data available), inability to relate seismic data quantitatively to reservoir properties, and lack of sufficient vertical resolution to generate detailed property models. The pervasive availability and acceptance of 3D data has substantially overcome the first obstacle. Seismic impedance volumes calculated from these 3D datasets can, for many reservoirs, provide a seismic parameter that can be directly related to a reservoir property (porosity, for example), thereby addressing the second problem. The last problem - the lack of sufficient vertical resolution for characterization applications, has been a more difficult problem to solve. Stochastic seismic inversion is one method that can provide the vertical resolution sufficient to generate detailed 3D reservoir property models. 
Seismic resolution is a function of the frequency of the recorded seismic wavefield and the velocity of the medium. Although some enhanced recovery methods, such as steam floods or fire floods, can alter the velocity of the medium by elevating the temperature of the reservoir, the velocity of the medium is generally considered to be fixed. Despite our best efforts to maximize the frequencies emitted and recorded during seismic acquisition, we often fall short of the resolution desired by geologists and engineers for use in reservoir modeling - vertical seismic resolution is typically one to two orders of magnitude less than log resolution (hundreds to thousands of centimeters versus tens of centimeters or less). Reservoir models constructed from log data alone display an excellent vertical resolution and a poor areal (horizontal) resolution. This is a direct reflection of the resolution characteristics of the log data - high vertical resolution and limited depth of investigation. Seismic data possess the opposite resolution characteristics: high areal resolution (bin size of the 3D survey) and poor vertical resolution (function of the seismic frequency content and velocity of the reservoir). Stochastic seismic inversion provides a unique framework wherein the advantages of seismic and log data can be combined. The stochastic impedance volume derives its areal resolution from the seismic data, but derives the vertical resolution from the log data used in the inversion procedure. The resulting high resolution (both vertical and horizontal) 3D volume is well suited for use in building detailed property models. Figure 1. Schematic depiction of the stochastic inversion process. As outlined in Haas and Dubrule (1994), the log data (sonic and density) are used in the simulation of pseudo-logs at each trace within the seismic survey (figure 1). A synthetic seismogram is generated from the pseudo-impedance log and is compared with the actual seismic trace at that location. 
The simulation that produces the best match between the synthetic seismogram and the actual seismic trace, as defined by some quantitative measure of goodness of fit, is retained as the inversion solution at that location. The vertical resolution of the simulated log data is determined by the selection of the vertical cell size (determined by the user), not by the frequency content of the seismic data (as determined by Mother Nature). The result of the stochastic seismic inversion is a 3D volume with a seismic-like areal resolution and a log-like vertical resolution that honors both the log data and the seismic data. Figure 2. Input seismic data volume. The enhanced resolution of the stochastic inversion process is evident from a visual examination of the seismic data cubes displayed in figures 2-4. Figure 2 displays the input seismic volume. Figure 3 is the result of a sparse spike estimation and recursive inversion. This inversion result displays a resolution similar to that of the input data. That is to be expected, as the vertical resolution is derived from the seismic data, and is therefore subject to the inherent limitations of seismic resolution. Figure 4 depicts the stochastic inversion cube, which displays a much finer vertical resolution than is observed in the input data or the recursive inversion result (figures 2 and 3). The overall impedance trends can be observed in both inversion results (figures 3 and 4)- layers or regions of high (red color) and low (blue color) impedance; however, the stochastic inversion result (figure 4) has a much better vertical definition within these general impedance trends. Figure 3. Recursive inversion result. Note that vertical resolution is similar to that of the input data. Figure 4. Stochastic inversion result. Note that the vertical resolution is significantly improved from that of the input data. A typical stochastic inversion exhibits a vertical resolution of approximately 1-2 meters. 
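The simulate-compare-retain loop described above can be sketched in a few lines of code. The following Python toy is my own illustration, not the Haas and Dubrule algorithm or any commercial implementation: the candidate pseudo-logs are drawn as a simple random walk rather than by geostatistical simulation conditioned on well logs, the wavelet is a tiny three-sample filter, and the "goodness of fit" measure is a plain sum of squared differences.

```python
import random

def reflectivity(impedance):
    """Reflection coefficients from an impedance series: r_i = (z_{i+1}-z_i)/(z_{i+1}+z_i)."""
    return [(impedance[i + 1] - impedance[i]) / (impedance[i + 1] + impedance[i])
            for i in range(len(impedance) - 1)]

def convolve(signal, wavelet):
    """Full convolution of two sequences (toy stand-in for synthetic seismogram generation)."""
    out = [0.0] * (len(signal) + len(wavelet) - 1)
    for i, s in enumerate(signal):
        for j, w in enumerate(wavelet):
            out[i + j] += s * w
    return out

def misfit(a, b):
    """Sum of squared differences between two traces of equal length."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def stochastic_invert(observed_trace, wavelet, n_cells, n_sims, rng):
    """Retain the simulated impedance pseudo-log whose synthetic best matches the trace."""
    best_log, best_err = None, float("inf")
    for _ in range(n_sims):
        # In a real workflow the candidate would come from geostatistical simulation
        # conditioned on well-log data; here it is just a positive random walk.
        log = [3000.0]
        for _ in range(n_cells - 1):
            log.append(max(1500.0, log[-1] + rng.gauss(0.0, 200.0)))
        synthetic = convolve(reflectivity(log), wavelet)
        err = misfit(synthetic, observed_trace)
        if err < best_err:
            best_log, best_err = log, err
    return best_log, best_err
```

Note that the vertical cell count `n_cells` is chosen by the user, echoing the point made above that the vertical resolution of the simulated logs is set by the cell size, not by the seismic frequency content.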
This is generally the same order of magnitude of resolution used in constructing the petrophysical reservoir property models, and the impedance data may be used along with the well log data to generate models of reservoir properties such as porosity. Figures 5 and 6 present porosity models of the interval depicted in the previous figures. The model of figure 5 was calculated using only the log data, whereas the model of figure 6 incorporated both the impedance and porosity information. Again, although overall trends are generally similar, specific details can be significantly different. The influence of the stochastic inversion impedance model (figure 4) on the porosity model shown in figure 6 is clearly discernible, as features observed in the impedance model (figure 4) that are not present in the log-only porosity model (figure 5) are again present in the porosity model derived from both the seismic and the log data (figure 6). Figure 5. Porosity model derived from kriging porosity log data. Figure 6. Porosity model derived from collocated co-kriging of stochastic inversion and porosity log data. Compare with porosity model of figure 5 and the stochastic inversion of figure 4. In carbonate reservoirs, impedance and porosity typically exhibit an inverse relationship (Rafavich, Kendall, and Todd, 1984); in clastic reservoirs, the relationship of porosity and impedance may be complicated by additional factors, such as a lack of impedance contrast between reservoir and non-reservoir rocks or fluid effects. In the case of a clear relationship between porosity and impedance, the seismic impedance volume may be incorporated directly with well data, via geostatistics, neural networks, or other methods, to populate the cells of the petrophysical model, as was done in constructing the porosity model of figure 6. Where the relationship between impedance and porosity is murky, the impedance data may be combined with facies data to derive a facies volume. 
Petrophysical properties can then be distributed by facies within the overall model framework. In summary, the impedance cube resulting from a stochastic inversion provides sufficient vertical resolution, as well as areal resolution, for use in reservoir characterization. This stochastic inversion impedance volume can be combined with log and engineering data, utilizing deterministic or statistical methods, to arrive at a final, consistent 3D property model that incorporates data types from different disciplines (geology, geophysics, engineering) for use in reservoir characterization.

About the Author(s)

Gary Robinson received a B.S. degree in Geology from Stanford University, and an M.S. degree in Geophysics from the University of Houston. He began his career with Mobil in 1978, and worked for CGG, Elf Aquitaine, Eastern American Energy, and Saudi Aramco prior to joining RC Squared in 1998. His current interest is in the application of seismic data to reservoir characterization.

References

Haas, A., and Dubrule, O., 1994, Geostatistical inversion - a sequential method of stochastic reservoir modeling constrained by seismic data: First Break, 12, 561-569.
Rafavich, F., Kendall, C. H. St. C., and Todd, T. P., 1984, The relationship between acoustic properties and the petrographic character of carbonate rocks: Geophysics, 49, 1622-1636.
{"url":"https://csegrecorder.com/articles/view/stochastic-seismic-inversion-applied-to-reservoir-characterization","timestamp":"2024-11-02T05:42:13Z","content_type":"text/html","content_length":"30800","record_id":"<urn:uuid:25d27426-5cb4-45b5-95bd-51ac82759f78>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00075.warc.gz"}
Blondeau Da Silva, S (2020). Limits of Benford’s Law in Experimental Field. International Journal of Applied Mathematics 33(4), pp. 685-695.

This work is cited by the following items of the Benford Online Bibliography (note that this list may be incomplete and is currently being updated; please check again at a later date):

Blondeau Da Silva, S (2022). An Alternative to the Oversimplifying Benford’s Law in Experimental Fields. Sankhya B. DOI:10.1007/s13571-022-00287-0.
Whyman, G (2021). Origin, Alternative Expressions of Newcomb-Benford Law and Deviations of Digit Frequencies. Applied Mathematics 12, pp. 578-586. ISSN/ISBN:2152-7385. DOI:10.4236/
Zenkov, AV (2021). Stylometry and Numerals Usage: Benford’s Law and Beyond. Stats 4(4), pp. 1051-1068. ISSN/ISBN:2571-905X. DOI:10.3390/stats4040060.
{"url":"https://benfordonline.net/references/up/2232","timestamp":"2024-11-11T03:13:26Z","content_type":"application/xhtml+xml","content_length":"6853","record_id":"<urn:uuid:524bd0fd-2d91-4e3c-9cec-e703767627cd>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00499.warc.gz"}
Finding the Size of the Augmented Matrix from the Number of Equations and Variables in the System
Question Video • Mathematics • Third Year of Secondary School

Fill in the blank: For the system of equations defined by 2 equations of 3 variables, the size of the augmented matrix is _.

Video Transcript

Fill in the blank. For the system of equations defined by two equations of three variables, the size of the augmented matrix is blank. A general system of linear equations in the variables 𝑥 one, 𝑥 two up to 𝑥 𝑛 and coefficients 𝑎 𝑖𝑗 looks like this. Then, another way of presenting this information is in what we call an augmented matrix, and that looks like this. This augmented matrix represents the same information just in a different way. We can see that the coefficients of the system of linear equations appear on the left of the augmented matrix. So the number of entries in this augmented matrix varies depending on the number of equations and the number of variables that we have. So a system of two equations of three variables will look like this. We can see that we have two equations and we have three variables. That’s 𝑥 one, 𝑥 two, and 𝑥 three. We can see straightaway that there are going to be six coefficients here. So our augmented matrix will look like this. We will have our six coefficients on the left, and we’ll have our two constants on the right. That’s 𝑏 one and 𝑏 two. So that is our augmented matrix for a system of two equations and three variables. And we can see that this is a two-by-four matrix because it has two rows and four columns. One mistake that you can make with this kind of question is only considering the order of the coefficient matrix. But remember, the augmented matrix includes the constants too, making it a two-by-four matrix.
So remember, the augmented matrix will always have the same number of rows as the number of equations and one more column than the number of variables.
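The counting rule stated in the transcript (rows = number of equations, columns = number of variables plus one) is easy to check programmatically. This small Python sketch is my own illustration; the function name is not from the video.

```python
def augmented_matrix_size(num_equations: int, num_variables: int) -> tuple[int, int]:
    """An augmented matrix has one row per equation and one column per variable,
    plus one extra column for the constants on the right-hand side."""
    return (num_equations, num_variables + 1)

# Two equations in three variables: the example from the video gives a 2-by-4 matrix.
rows, cols = augmented_matrix_size(2, 3)
```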
{"url":"https://www.nagwa.com/en/videos/940154543824/","timestamp":"2024-11-12T05:54:57Z","content_type":"text/html","content_length":"249915","record_id":"<urn:uuid:71893825-0f8f-42dc-88fb-01cfe0f57ddc>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00540.warc.gz"}
Spark SQL is a big data processing tool for structured data query and analysis. It allows us to query structured data inside Spark programs, using SQL or a DataFrame API which can be used in Java, Scala, Python and R. To run a streaming computation, developers simply write a batch computation against the DataFrame / Dataset API, and Spark automatically increments the computation to run it in a streaming fashion.

Repartition distributes a DataFrame by the given expressions; the number of partitions is equal to spark.sql.shuffle.partitions. Note that in Spark, when a DataFrame is partitioned by some expression, all the rows for which this expression is equal are on the same partition (but not necessarily vice versa).

ORDER BY takes a comma-separated list of expressions along with the optional parameters sort_direction and nulls_sort_order, which are used to sort the rows; sort_direction optionally specifies whether to sort the rows in ascending or descending order. This is similar to ORDER BY in the SQL language. Spark SQL also gives us the ability to use SQL syntax to sort our DataFrame, and in this article I will explain sorting a DataFrame by using these approaches on multiple columns.

The SQL random function is used to get random rows from the result set; we use it, for example, in online exams to display the questions randomly for each student. The usage of SQL SELECT random is done differently in each database: in Oracle, the songs are listed in random order thanks to the DBMS_RANDOM.VALUE function call used by the ORDER BY clause, while on SQL Server you need to use the NEWID function. Here we give an example of simple random sampling with replacement in PySpark and simple random sampling in PySpark without replacement.
However, during execution Spark SQL writes intermediate data to disk multiple times, which reduces its execution efficiency. The VALUE function in the DBMS_RANDOM package returns a numeric value in the [0, 1) interval with a precision of 38 fractional digits. Simple random sampling in PySpark is achieved by using the sample() function; in simple random sampling, every individual is equally likely to be chosen.

In order to sort in descending order in a Spark DataFrame, we can use the desc property of the Column class or the desc() SQL function. For example, to order by a column called Date in descending order in a Window function, write Window.orderBy($"Date".desc): after specifying the column name in double quotes, .desc sorts in descending order.

In Hive, ORDER BY guarantees total ordering of data, but for that it has to be passed to a single reducer, which is normally performance-intensive; therefore, in strict mode, Hive makes it compulsory to use LIMIT with ORDER BY so that the reducer doesn’t get overburdened.

Spark SQL also lets us sort with raw SQL; to do this we need to create a temporary view:

# Raw SQL
df.createOrReplaceTempView("df")
spark.sql("select Name,Job,Country,salary,seniority from df ORDER BY Job asc").show(truncate=False)
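Since each database spells "order randomly" differently, a self-contained way to try the idea without a Spark or Oracle installation is SQLite, via Python's built-in sqlite3 module. SQLite is my addition and is not discussed in this article; its spelling is ORDER BY RANDOM(), while Spark SQL and Hive use rand().

```python
import sqlite3

# In-memory database with a few rows to shuffle.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE songs (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO songs (title) VALUES (?)",
                 [("Song A",), ("Song B",), ("Song C",), ("Song D",)])

# Each dialect names its random function differently:
#   Oracle:       ORDER BY DBMS_RANDOM.VALUE
#   SQL Server:   ORDER BY NEWID()
#   Spark / Hive: ORDER BY rand()
#   SQLite:       ORDER BY RANDOM()
shuffled = conn.execute("SELECT title FROM songs ORDER BY RANDOM()").fetchall()
```

The ORDER BY clause reorders the result set but does not change its contents, so the same four titles come back on every run, just in a random order.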
{"url":"http://cers-deutschland.org/c1491/spark-sql-order-by-random-4f56d3","timestamp":"2024-11-10T05:05:51Z","content_type":"text/html","content_length":"22861","record_id":"<urn:uuid:4d3f5cd5-27c8-4061-a653-12afc8bfdb9d>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00788.warc.gz"}
Matlab Assignment Help | Assignment 3 of MTH2051/3051

Please read the following instructions carefully. If in doubt, please raise issues in the discussion forum.
i) The submission deadline is 6pm on Tuesday of week 10.
ii) Please complete the template files provided through Moodle. Do not change filenames or headers.
iii) Your code is not required to check whether a hypothetical user of your code provides reasonable inputs.
iv) Symbolic computation and high-level Matlab commands are prohibited and result in zero marks for the task in which they were used.
v) The marking scheme is full marks for a correct implementation and no marks for an incorrect implementation.
vi) Submit a zip-file called firstname_surname_assignment_3.zip containing all your Matlab files through Moodle.

Assignment 3.1. (polynomial interpolation, 8 marks)
In this exercise, you will see how polynomial interpolation behaves in computational examples, and why we often use splines instead.
a) Complete the file myNewtonCoefficients.m by implementing an algorithm that computes the scheme of divided differences. Recall that the divided differences can be organised in a lower triangular matrix as explained in Remark 5.7, and that the Newton coefficients can be obtained as in Theorem 5.9.
b) Complete the file myEvaluateNewtonPolynomial.m by implementing the Horner-type algorithm from Remark 4.14.
c) Run the script wrapper_3_1.m, and relate the behaviour of the interpolation polynomials generated by the wrapper to Theorem 5.13. Compute the derivatives of the functions f used as test cases by the wrapper to explain the output you see. (nothing to submit, not marked)

Assignment 3.2. (numerical differentiation, 2 marks)
In this exercise, you will see the interplay between the theoretical truncation error and the effect of round-off errors on the behaviour of numerical differentiation.
a) Complete the file myForwardDQ.m by implementing the forward difference quotient from Example 6.2.
b) Complete the file myCentralDQ.m by implementing the central difference quotient from Example 6.2.
c) Run the script wrapper_3_2.m, and explain as much of the output as you can, based on statements from the lecture notes and exercises you have completed. You cannot explain every detail, but most of what you see in the plot. (nothing to submit, not marked)

Assignment 3.3. (numerical integration, 6 marks)
In this exercise, you will see in a computational example how composite quadrature reduces the quadrature error when the integration interval is divided into more and more subintervals.
a) Complete the file myTrapezoidal.m by implementing the trapezoidal rule.
b) Complete the file mySimpson.m by implementing Simpson's rule.
c) Complete the file myCompTrapezoidal.m by implementing the composite trapezoidal rule.
d) Complete the file myCompSimpson.m by implementing the composite Simpson's rule.
e) Run the script wrapper_3_3.m, and explain its output by referring to the corresponding error estimates in the lecture notes. (nothing to submit, not marked)
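The assignment asks for Matlab implementations against the course templates; as a language-neutral sketch of the two difference quotients of Assignment 3.2 (function names are mine, not the template's, and "Example 6.2" refers to the course notes), here is the idea in Python, including the truncation-order comparison that wrapper_3_2.m is meant to reveal.

```python
import math

def forward_dq(f, x, h):
    """Forward difference quotient (f(x+h) - f(x)) / h; truncation error O(h)."""
    return (f(x + h) - f(x)) / h

def central_dq(f, x, h):
    """Central difference quotient (f(x+h) - f(x-h)) / (2h); truncation error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# For f = sin we know f'(1) = cos(1), so we can measure the error directly.
# For moderate h the central quotient is markedly more accurate; for very
# small h, round-off error eventually dominates both (the point of 3.2 c).
h = 1e-4
err_forward = abs(forward_dq(math.sin, 1.0, h) - math.cos(1.0))
err_central = abs(central_dq(math.sin, 1.0, h) - math.cos(1.0))
```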
{"url":"https://www.bestdaixie.com/matlab%E4%BB%A3%E5%86%99-assignment-3-of-mth2051-3051/","timestamp":"2024-11-12T16:48:09Z","content_type":"text/html","content_length":"60262","record_id":"<urn:uuid:bf962a3b-11b3-4571-a9fe-d0c339104ff3>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00386.warc.gz"}
Implement Gini Impurity Calculation for a Set of Classes

Task: Implement Gini Impurity Calculation

Your task is to implement a function that calculates the Gini impurity for a set of classes. Gini impurity is commonly used in decision tree algorithms to measure the impurity or disorder within a dataset. Write a function gini_impurity(y) that takes in a list of class labels y and returns the Gini impurity rounded to three decimal places.

y = [0, 1, 1, 1, 0]
# Expected Output:
# 0.48

Understanding Gini Impurity

Gini impurity is a statistical measure of the impurity or disorder in a list of elements. It is commonly used in decision tree algorithms to decide the optimal split at tree nodes. It is calculated as follows, where \( p_i = \frac{n_i}{n} \) is the probability of class \( i \):

\[ \text{Gini Impurity} = 1 - \sum_{i=1}^{C} p_i^2 \]

A Gini impurity of 0 indicates a node where all elements belong to the same class, whereas a Gini impurity of 0.5 indicates maximum impurity for two classes, where elements are evenly distributed among the classes. A lower impurity thus implies a more homogeneous distribution of elements, suggesting a good split, as decision trees aim to minimize impurity at each node.

Advantages and Limitations

Advantages:
• Computationally efficient
• Works for binary and multi-class classification

Limitations:
• Biased toward larger classes
• May cause overfitting in deep decision trees

Example Calculation

Suppose we have the set: [0, 1, 1, 1, 0].
The probability of each class is calculated as follows:

\[ p_{0} = \frac{2}{5}, \quad p_{1} = \frac{3}{5} \]

The Gini impurity is then calculated as follows:

\[ \text{Gini Impurity} = 1 - (p_0^2 + p_1^2) = 1 - \left(\left(\frac{2}{5}\right)^2 + \left(\frac{3}{5}\right)^2\right) = 0.48 \]

def gini_impurity(y: list[int]) -> float:
    """Return the Gini impurity of a list of class labels, rounded to three decimal places."""
    n = len(y)
    sum_squared = 0.0
    for cls in set(y):
        sum_squared += (y.count(cls) / n) ** 2  # squared probability of this class
    return round(1 - sum_squared, 3)
{"url":"https://www.deep-ml.com/problem/Gini%20Impurity","timestamp":"2024-11-06T09:15:03Z","content_type":"text/html","content_length":"27550","record_id":"<urn:uuid:13512227-5749-4051-8de4-7d0e3cb3f613>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00820.warc.gz"}
Mike was saying at CT2013 that if one takes HoTT as a model of weak $\infty$-groupoids, then there are no relations, merely free generators one dimension higher. I expect this point of view is largely model-independent. I can see that $\infty$-groupoids are important, but it’s not obvious to me why they should be more important than strong homotopy types. As for Whitehead’s principle – I think it’s much closer in spirit to well-pointedness (in ETCS) or extensionality (in ZFC) than to LEM or AC, because it’s telling us that a homotopy type is determined by its points, paths, etc. Then again, I don’t know anything about $(\infty, 1)$-toposes, so my understanding of hypercompleteness is probably too naïve. And there’s still the question of why $\infty$-groupoids show up in frameworks that don’t explicitly mention them, such as derivators or model categories. The more I think about it the more confused I feel. Here is the story that I try to tell myself: It’s not surprising that the theory of $\infty$-groupoids is the free cocomplete homotopy theory on one object: after all, an $\infty$-groupoid is constructed by a transfinite sequence of cell attachments. But I’m not completely convinced: just because a system is generated by some constructors doesn’t mean it is freely generated. On the other hand it is somewhat tautological that the free cocomplete category on one object is $Set$, the core observation being that there is always a canonical natural map $\operatorname{lim}_{c : C} \operatorname{colim}_{d : D} Hom(X c, Y d) \to Hom(\operatorname{colim}_{c : C} X c, \operatorname{colim}_{d : D} Y d)$ which happens to be a natural bijection when $X$ and $Y$ are the constant diagrams with value $1$ in $Set$. And then from here it’s perhaps unsurprising that the minimal basic localiser for $Cat$ is intimately connected to the free cocomplete derivator on one object. But that still doesn’t explain why that turns out to be $\infty\text{-Grpd}$, or why that has the properties that it has!
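Spelling out that "core observation" a little (this is my own elaboration, using only the two standard exactness properties of Hom): the contravariant Hom sends colimits to limits, and for each fixed object there is a canonical comparison out of the colimit of Hom-sets, so the canonical natural map factors as

```latex
\[
  \operatorname{lim}_{c : C} \operatorname{colim}_{d : D} \mathrm{Hom}(X c, Y d)
    \longrightarrow
  \operatorname{lim}_{c : C} \mathrm{Hom}\!\left(X c,\ \operatorname{colim}_{d : D} Y d\right)
    \;\cong\;
  \mathrm{Hom}\!\left(\operatorname{colim}_{c : C} X c,\ \operatorname{colim}_{d : D} Y d\right).
\]
% When X and Y are the constant diagrams at 1 in Set, the colimit of either
% constant diagram is the set of connected components of the indexing
% category, so both sides reduce to Hom(pi_0(C), pi_0(D)) and the comparison
% map is a bijection.
```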
I think weak homotopy types vs strong homotopy types is a red herring. “Weak homotopy types” are just an archaic name for $\infty$-groupoids, and those are obviously the important things. And if by “hypercompleteness” you mean the truth of Whitehead’s theorem, that’s a classicality axiom like LEM and AC. I haven’t read enough of PS to say much about Grothendieck, but I can say that my own interest lies in the following questions: 1. Why weak homotopy types, as opposed to strong homotopy types? I think this is especially mysterious from the point of view of derivators, because (at least on the face of it) derivators are an abstraction of what one might call the “local 2-category theory” of a complete and cocomplete 1-category. 2. Put it another way, where does hypercompleteness come from? This surely has something to do with higher induction. 3. Of the shapes for higher category theory, why simplices? Certainly he thought it one of the most important aspects of PS as there was a whole section called ‘the Modelizer story’ if I remember rightly. I knew it was Grothendieck’s; I was just wondering whether you had any insight into his thoughts. (-: The word is Grothendieck’s, not mine. (It’s older than I am!) I came across it in Pursuing Stacks and had always assumed that it was just his word for category with weak equivalences until I looked it up properly. Can you say anything about why this is considered an important enough concept to merit a neologism? Created modelizer. It’s not clear to me exactly what Grothendieck is taking as a property or as a structure in his definitions, but I tried to make a guess. Homotopy type theory teaches us that $\infty$-groupoids are the inevitable result of studying the notion of sameness. 
So it shouldn’t be at all surprising that they are a fundamental part of mathematics. By contrast, the notion of strong homotopy type is tied to the notion of topological space, which, while certainly important, is nowhere near as fundamental, and admits of many modifications (sequential space, compactly generated space, pseudotopological space, convergence space, etc.) that would at least potentially change the resulting notion of strong homotopy type. Well-pointedness and extensionality are about internality versus externality. Internally, the logic of a topos is always well-pointed, where its objects are regarded as “sets” and determined by their points. Similarly, the internal logic of an $(\infty,1)$-topos is always well-pointed, where its objects are regarded as $\infty$-groupoids and also determined by their points. For instance, it’s true internally that if $f,g:A\to B$ are two maps such that $f(x)=g(x)$ for all $x:A$, then $f=g$; this is the principle of function extensionality. Whitehead’s principle is instead about whether an $\infty$-groupoid is determined by its truncations, which is a different thing than being determined by its points. I would say that $\infty$-groupoids are the free cocomplete homotopy theory because a “homotopy theory” is a category enriched over $\infty$-groupoids, just as Set is the free cocomplete 1-category because every 1-category is enriched over Set. (Why this also works with derivators is somewhat more mysterious, but I don’t think that issue has any bearing on your questions in #6.) As an unreformed (strong) shape theorist, the importance of strong homotopy type is that it is not the same as a weak homotopy type. The interesting objects amongst the spaces include those where the simple idea of sameness of points, given by probing with arcs, fails to give enough information. Perhaps this relates to ’maps from models into’ and maps to some sort of ’comodels’ and ’maps out of’.
I agree with Mike that this is not as foundational nor as basic as the weak homotopy type story. On another hand, I have wondered about its relationship to duality. The nice models of weak homotopy types are cofibrantly generated. Some of the models for pro-categories and thus for the other side of the coin, so to speak, are fibrantly generated. Does that suggest some large scale duality? There are quite recent papers (Barnea and Schlank) on variants of the cosmall ‘co-object’ argument… it is horribly tempting to write co-argument!!!! By the way, the fact that the derivator of ∞-groupoids is the free left derivator on one object depends on the fact that every $\infty$-groupoid can be presented as the localization of some 1-category. This is an accident of classical mathematics, due to the axiom of choice and Whitehead’s principle. Hence it is unlikely to be true constructively that derivators characterize the theory of ∞-groupoids, and thus, one might argue, there is unlikely to be a good reason for it to be true. (-: Interesting! I suppose you are referring to Thomason’s proof that the Thomason model structure on $Cat$ is Quillen-equivalent to the standard model structure on $sSet$. But if I understand correctly, Cisinski has proved a closely related general result: For any basic localiser $\mathcal{W}$, there is a class of weak equivalences for $sSet$ such that the resulting left prederivator is equivalent to $Cat [\mathcal{W}^{-1}]$; if $\mathcal{W}$ is accessible, then the weak equivalences come from a Cisinski model structure. (Put together Exemple 4.1.21, Proposition 4.2.7, and Corollaire 4.2.18 in Astérisque 308.) The level of generality suggests that the use of Whitehead’s principle in Thomason’s proof may be inessential. Perhaps it is only needed to show that the weak homotopy equivalences in $Cat$ are the weak equivalences of some model structure. 
Whitehead’s principle (along with a related classicality axiom that one might call the “set-presentation axiom”, that every type admits a surjection from a set, which follows from a sufficiently strong form of AC) is also necessary to show that simplicial sets present all $\infty$-groupoids. So moving from Cat to sSet changes nothing. Hmmm. What do you mean by $\infty$-groupoid? You must have a specific definition in mind in order to be able to say that simplicial sets don’t always present all of them! In homotopy type theory, “$\infty$-groupoid” is an undefined term, like “set” in ZFC. (We usually pronounce it as “type”, though.) OK, but then I don’t understand what you mean by “show that simplicial sets present all ∞-groupoids”. Where is this comparison happening? We have sets in homotopy type theory, so we can define simplicial sets therein. I see. I was under the impression that simplicial types were problematic to define. General simplicial types are problematic, because they correspond to genuinely homotopy coherent simplicial diagrams involving lots of coherence cells that are troublesome to axiomatize. But for simplicial sets = simplicial 0-truncated types this problem is alleviated. But even if it were still a problem to define simplicial sets in HoTT, we could still say confidently that if we could define them, then they wouldn’t model all types, since the problem of defining simplicial types is due to technical limitations of current type theory, while the failure to model all types is justified by higher topos models, which we understand. This has probably been discussed before in venues I haven’t been, but has anyone tried defining the category $\Delta$ by its universal property? This might not help with the problem of simplicial types in toto, but a partial solution? Do you mean $\Delta_+$ as the free monoidal category on a monoid? 
That seems to reduce the problem to the no-easier one of defining coherent monoidal structures and monoids… David 22, if you mean the duality of $\Delta$ with the category of intervals people like Lawvere and Joyal like to put some other gadgets into the role of intervals, including internally in some topoi when it can be even more natural. I understood unfortunately just the surface of this discussion. Mike - hmm, ok. Zoran - I meant what Mike was talking about, but your suggestion raises different sorts of questions.
[whatwg] defer on style, depends Boris Zbarsky bzbarsky at MIT.EDU Thu Feb 12 08:41:20 PST 2009 Garrett Smith wrote: >> I would be fine with a way to flag scripts with that information, though >> there is a catch-22: if you flag such a script and it DOES depend on style >> information, then existing UAs will get it "right" and any UA implementing >> the new spec will get it "wrong". > If the page does what it is designed to do, and that the design is > flawed, the page would be broken by design. Designing things to be > broken would be "wrong". My point is that if no UAs implement the new stuff it's easy to make such a mistake, and then UAs that _do_ implement it will show your page not as you intended. In other words, widespread adoption of this in authoring before implementation would actually raise a bar to implementation, since it raises the chances that implementing the feature will break sites. Hence the catch-22 mention above. >>>> Not sure what this example is, or why this is insufficienty served by, >>>> say, >>>> putting the <link> at the end of the HTML (assuming HTML allowed that, of >>>> course). >>> Are you proposing HTML allow that? >> That's one possible solution to the issue of starting stylesheet loads as >> late as possible, yes. It's not a great one a priori, but has the benefit >> of good compat with existing UAs (which you said was a priority for you). > Not that I think you are wrong, but that statement ought to be backed > up by some tests. You mean tests showing that current UAs allow <link rel="stylesheet"> at the end of <body>? > No, just observing that the problem could have been avoided with a > "depends=" attribute, if only such attribute had existed c2000, and > having scripts wait only when depends is set. I like this design. That's nice, but the question is where we can go now given the current situation, not the one that existed 10 years ago. 
>>> An "independent" attribute on a link says that a browser does not need >>> to wait for that resource to finish loading before it loads other >>> resources (like loading a script). When the parser parses that >>> "independent" attribute, it sets a flag for the browser go ahead and >>> download and run any subsequent script. >> That doesn't work for today's browsers, though, just like flagging the >> script doesn't. Or am I missing something? > You got it. It doesn't work for today's browsers. However, it isn't > guaranteed *not* to work by any standard. In fact, browsers behave > differently on the matter. Could this new feature result in breaking > code in older browsers? No, but my point is that if you're concerned about solutions due to their impact on old browsers, then this solution has the same impact as all the things Ian has proposed... > You say that stylesheets do not block script loading. That may be true > of "Shiretoko" 1.9.1, however, that is not what I see for Firefox 3.0. > The example I posted shows that stylesheets hold up body content from > rendering. If that content contains a script tag, the script will > *not* load *or* run. I can tell you for a fact, having implemented this part of Gecko myself, that a stylesheet will prevent body content from _rendering_, but NOT from being parsed. It will furthermore not prevent scripts from loading, but _will_ prevent them from running. I can point you to the relevant code if desired. > The following example shows this to be true: > http://dhtmlkitchen.com/jstest/block/link-external.html This example demonstrates that pending script execution blocks parsing and hence script loading in Gecko 1.9.0. In fact, it says so right in the example. That's not the same thing as stylesheets blocking script loading. And yes, in Gecko 1.9.1 the speculative parser will likely kick off all the script loads while still waiting for the stylesheet in this case.
> The only explanation I have for this behavior is that the browser is > waiting for the stylesheet to complete before it requests the script > in the body. No, it's waiting for the <head> scripts to execute before parsing the body and requesting the script. Those scripts happen to be waiting for the stylesheet, but if you didn't have them there the script in the <body> would be loaded in parallel with the stylesheet. Heck, you don't even need the external script in <head>. The inline one would give you the same behavior. > That is why it would be better for performance to have > that script prefetched Something that UAs are working on anyway, with speculative parsing used to prefetch content. That's happening in at least Webkit and Gecko. >> What I said was true for all scripts. We do not differentiate between >> content in <head> and content in <body> in this regard. > In Shiretoko 3.1, true, but in Firefox 3.0, the bottom script is not > loaded. That has nothing to do with <head> vs <body>, as you could trivially test by moving those scripts around in your document. All that matters is the order of the script tags. >>> However, external resources such as SCRIPT or IMG that appear in the >>> BODY will not get requested by the browser until the page content >>> renders. >> You mean until all the HTML before the tag has been parsed? Or something >> else? There's no dependency between script loading+execution and page >> rendering, in Gecko. Heck, you can run scripts in a display:none iframe, >> with no rendering in sight. > By "all the HTML before the tag has been parsed," I think you mean, > all the HTML before the tag for that IMG or SCRIPT resource. > Next you're saying that visual display is not correlated to > script loading or script execution. > I'm not sure how this is related. You keep talking about "until the page content renders", which is visual rendering. > In Shiretoko, a script, even a deferred script, will not run until the > stylesheet is loaded.
> Can we make an improvement on that, or to make that improvement > configurable to the page author? I think we can, sure. In fact I'm proposing flagging scripts that don't depend on stylesheets, no? >> Ah, that is one thing that Gecko does do: we don't start _layout_ (as >> opposed to parsing) until all the stylesheets in <head> have loaded. > For Firefox 3.0, IMG and SCRIPT that are part of the body are fetched > around this time. They are not fetched prior. Why not? Because you have <script>s after your stylesheets, not just stylesheets. Really, controlled experiments are hard. You have to hold all but one variable constant. > In that case, the link would not block layout. Yes, which is why you get a performance hit when it loads. But I thought we were talking about stylesheets that don't "really" affect layout (late-loading stylesheets, which you wanted). > In "Shiretoko" 1.9.1b3pre, a deferred script waits for all stylesheets > to load before running. However, this is not guaranteed behavior in > any standard. True. At least not yet. I suspect it's pretty much required for web compat, though, which is why it's implemented that way. Or at the very least the scripts need to wait for the stylesheets that came before them. >>> Question: When the stylesheet is eventually applied, could the reflow >>> be optimized for performance? >> Not easily, no. Or rather, the reflow already is; the style data >> recomputation is the hard part. > What would make it easier? I'd really like to know how to design my > pages so that they are faster and more responsive. Well, one option is to stop worrying about micromanaging the load order and assume that speculative parsing will solve your problems.... will it? > A deferred stylesheet being requested by the browser would not be a > problem. Mozilla already makes predictive fetches for links. 
However, > if a deferred stylesheet is fetched during loading, should that > stylesheet (and rules.length, etc) be accessible via script in that > time? Should the deferred link fire a load event after the request > completes? In my opinion, prefetching should have no effect on what the DOM sees. It should just make it look like the network load took a lot less time than it would otherwise. In other words, you load the stylesheet, parse it, whatever, but don't hook it up to the document until you "really" parse the <link> tag.
0x29A is an esoteric programming language created by David Lewis in 2004. It uses a mixture of imperative and functional programming, forming a new paradigm the creator calls dysfunctional programming.

0x29A has a memory consisting of a one-byte register, which is initially zero, two variables called a and b that can contain functions, and a stack of functions, which is initially empty. The two variables are not accessible directly, but are used as a temporary store while popping and pushing functions on the stack. An empty stack behaves as if it contains the identity function ((sk)s).

Stack-pushing commands
0x29A contains six commands, written s, k, ., ,, + and -, which push the functions of the same name onto the stack.

Looping commands
0x29A features two looping commands, represented by the left and right square brackets. When a left square bracket is encountered, then if the register's value is zero, program flow continues at the matching right square bracket (the program halts if there is no matching right square bracket); otherwise program flow continues at the next instruction. When a right square bracket is encountered, then if the register's value is nonzero, program flow continues at the matching left square bracket (or at the beginning of the program if there is no matching left square bracket); otherwise program flow continues at the next instruction.

Stack-rearranging commands
0x29A contains two commands that rearrange the stack and allow the creation of complex functions. The command represented by the percentage sign pops two functions from the top of the stack, storing them in the variables a and b, and then pushes them back onto the stack in reverse. The command represented by a tilde pops two functions from the top of the stack, storing them in the variables a and b, and then pushes (ba) onto the stack.

Function evaluation
After each command is executed, the function at the top of the stack is evaluated if this is possible.
The following rules are used when evaluating functions:

• (((sx)y)z) becomes ((xz)(yz))
• ((kx)y) becomes x
• ((.x)y) becomes x, the value of the register is printed (as an ASCII character), and the register is set to 0
• ((,x)y) becomes x, and the register is set to the ASCII value of a character from the input
• ((+x)y) becomes x, and the register is incremented
• ((-x)y) becomes x, and the register is decremented

Computational class
In the evaluation rule for s, it is not specified that either xz or yz are evaluated. This may be a bug. If it is not, Turing completeness is not obvious, since a working combinator calculus can no longer be trivially embedded into the language. Indeed, function evaluation alone is then decidable and so not Turing complete. By checking about a hundred cases, we can show that every function either halts or reaches a recognizable growing pattern, where f^n x = f(f(...(x)...)) and k stands for any of k, ., + or -. Note that s^2 k x y = x y, so this is just sii(sii) in disguise.

Nevertheless, even with this restriction there are enough ingredients to compile Brainfuck into the language. The basic idea is to represent a brainfuck tape as two functions, one for the part to the left of the pointer and another for the part to the right, while the cell at the pointer is kept in the register. Each half-tape function, when applied to k, will increment the register by the saved amount, and then return the function for a shifted half-tape with the value popped off. To shift a value of 3, say, onto a function f is to replace f by

s(s+)(s(s+)(s(s+)(k f)))

This can be achieved with the commands

k%~ ss+~~%~ ss+~~%~ ss+~~%~

To shift the register to a function, use

k%~ [ss+~~%~ -%~k~]

Finally, the output operator in 0x29A is destructive whereas the output operator in brainfuck is not, so one needs to clone the value before handing it to the output operator.
This is done by making the function twice (once in top-of-stack, once in next-to-top-of-stack), thereby destroying the register, running it once, restoring the register, running the output and running the second copy. The translation rules are thus as follows:

+ -> +%~k~
- -> -%~k~
, -> ,%~k~
. -> k%~ kk~ [ss+~~%~ % ss+~~%~ % -%~k~] k~ .%~k~ ~
< -> k%~ [ss+~~%~ -%~k~] % k~ %
> -> % k%~ [ss+~~%~ ~%~k~] % k~
[ -> [
] -> ]
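The translation table above is purely mechanical, so it can be applied by a short script. Here is a minimal Python sketch; the function name `bf_to_0x29a` is my own, and the single trailing space after each non-bracket rule is a cosmetic separator I chose, not part of the table. My reading of where each rule string ends in the table is also an assumption.

```python
# Brainfuck -> 0x29A translator, using the translation rules above.
# Rule strings are copied from the table; trailing spaces just
# separate command groups and are ignored by 0x29A.
RULES = {
    "+": "+%~k~ ",
    "-": "-%~k~ ",
    ",": ",%~k~ ",
    ".": "k%~ kk~ [ss+~~%~ % ss+~~%~ % -%~k~] k~ .%~k~ ~ ",
    "<": "k%~ [ss+~~%~ -%~k~] % k~ % ",
    ">": "% k%~ [ss+~~%~ ~%~k~] % k~ ",
    "[": "[",
    "]": "]",
}

def bf_to_0x29a(program: str) -> str:
    """Translate a Brainfuck program; non-command characters are dropped."""
    return "".join(RULES[c] for c in program if c in RULES)
```

For example, `bf_to_0x29a("[-]")` produces `[-%~k~ ]`, a loop that decrements the register until it reaches zero.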
Notable Properties of Specific Numbers

The smallest (positive) integer whose name (in English) has the five vowels A,E,I,O,U in any order: "OnE thoUsAnd (and) fIve". When answering problems like this we don't count the letters in "and" because not all people agree on when to include an "and". See also 34, 84, 1025, 1084, 5000, and 1000000000008020.

According to my classical sequence generator, 1011 is the next number after my "favourite" numbers 3, 7, 27 and 143. The formula it finds is: A_0 = -1; A_{N+1} = (2N-1)(A_N + 1) + 3 (sequence MCS8041809, with its own whimsical page here). Along with its successor 9111, the terms in the sequence share common factors and other properties with 3, 7, 27 and 143. This serves as an example of how easy it is to find a sequence formula to match an arbitrary set of numbers. See also 695, 715, and 1011.

The number of ways to form a set of yes-or-no questions that can be used in a "20 questions"-like game where the questioner knows that the item to be guessed is one of a specific pre-defined set of 8 items. For 8 items you always need at least 3 questions and might need as many as 7. This problem is the subject of OEIS sequence A5646, and I have written a thorough discussion of it here.

This is 2^10, and it's pretty close to 1000 = 10^3. As a result, and because of the long-established habit of grouping digits in threes (see 1000), the computer industry has adopted the international prefixes kilo, mega, etc. to refer to quantities that are actually powers of 1024 almost as if they were actually powers of 1000. This sometimes causes practical problems and confusion which can be avoided by using the official prefixes kibi, mebi, gibi, etc. (There is a table here and more about SI prefixes here.) If for some reason you decide to learn the powers of 2, the closeness of 1024 and 1000 makes it a little easier.
It's also handy that 1024 is the 10th power of 2 since 10 is the base of our number system, so (for example) 2^7, 2^17, 2^27, 2^37 and so on all start with roughly the same digits. Here is a table of powers of 2:

2^0 = 1      2^10 = 1024     2^20 = 1048576
2^1 = 2      2^11 = 2048     2^21 = 2097152
2^2 = 4      2^12 = 4096     2^22 = 4194304
2^3 = 8      2^13 = 8192     2^23 = 8388608
2^4 = 16     2^14 = 16384    2^24 = 16777216
2^5 = 32     2^15 = 32768    2^25 = 33554432
2^6 = 64     2^16 = 65536    2^26 = 67108864
2^7 = 128    2^17 = 131072   2^27 = 134217728
2^8 = 256    2^18 = 262144   2^28 = 268435456
2^9 = 512    2^19 = 524288   etc.

Given the importance of the powers of 2 for such things as the size of the memory chips in your computer, it's actually pretty cool that we have this coincidental closeness to a power of 10, and a popular power of 10 to boot. It didn't have to be that way. The only other powers of numbers that come anywhere close to a power of 10 are things like 22^3 = 10648 and 316^2 = 99856, and those aren't too useful because there isn't much in real life that involves powers of 22 or 316.

The smallest (positive) integer whose name (in English) has the vowels A,E,I,O,U, plus Y, in any order: "OnE thoUsAnd (and) twentY-fIve". When answering problems like this we don't count the letters in "and" because not all people agree on when to include an "and". See also 34, 84, 1005, 1084, 5000, and 1000000000008020.

A fairly highly-composite number (but not a record-setter) which appears as a unit of division in the Talmudic Hebrew time system. See also 108.

The smallest (positive) integer whose name (in English) has the five vowels A,E,I,O,U, in order: "one thousAnd (and) EIghty-fOUr". When answering problems like this we don't count the letters in "and" because not all people agree on when to include an "and". See also 34, 84, 1005, 1025, 5000, and 1000000000008020.

Also, if you take any 3-digit number that is not the same in reverse (e.g.
143), take the difference between it and its reversal (341-143=198), then add that difference to its own reversal (198+891) and you always get 1089. Before this fact became well-known, it was often utilised by magicians as a "forcing" or "Magician's choice" technique, to take a number freely chosen by a spectator and make it fit the requirements of an illusion.

1089 is a close approximation to the redshift of the cosmic microwave background radiation.

The first of the Wieferich primes, which are the primes p such that p^2 divides 2^(p-1) - 1. Only two are known (OEIS sequence A1220), the other being 3511, and a next one, if any, must be greater than

The following is from a posting to the Math Fun mailing list by R. W. Gosper, 2009 Dec 03:

1100 in binary (base 2) is 10001001100, which has an even number of 1's. For this reason 1100 is called "evil" (as contrasted with odious numbers). But in decimal 1100 has two 1's, so it's "evil" in decimal as well.

1101 in binary (base 2) is 10001001101, which has an odd number of 1's. For this reason 1101 is called "odious" (as contrasted with evil numbers). But in decimal 1101 has three 1's, so it's "odious" in decimal as well.

Appears in Star Wars as a reference to George Lucas' earlier movie THX 1138. See also 2187.

According to some (e.g. project 1138), the number of benefits and protections afforded to married couples under United States federal laws.

Along with 1210, forms an "amicable pair" like 220 and 284.

Like 204, its square is also triangular — see 41616 for more.

The smallest "self-describing" number: Its digits comprise 1 zero, 2 ones, 1 two, and 0 threes. See 6210001000 for more.

1225 = 35^2 = 49×50/2 is both square and triangular. This is closely related to the fact that 1225 is the sum of the first five odd cubes: 1 + 27 + 125 + 343 + 729. See also 204 and 1296.
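The 1089 trick described above is easy to confirm exhaustively. A minimal Python check follows (the helper name `trick` is my own); note the one subtlety the trick hides: the difference must be treated as a three-digit value, so a difference of 99 is padded to 099 and reverses to 990.

```python
# Exhaustive check of the 1089 trick: for every 3-digit n that differs
# from its own reversal, |n - rev(n)| plus the reversal of that
# difference (padded to 3 digits) is always 1089.
def trick(n: int) -> int:
    rev = int(str(n)[::-1])                 # e.g. 143 -> 341
    diff = abs(n - rev)                     # always a multiple of 99
    return diff + int(f"{diff:03d}"[::-1])  # pad: 99 -> "099" -> 990

results = {trick(n) for n in range(100, 1000) if str(n) != str(n)[::-1]}
print(results)  # prints {1089}
```

The padding matters because a 3-digit number whose first and last digits differ by 1 (such as 211) gives a difference of only 99; without the zero-padding the trick would appear to fail for those cases.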
This number has 36 distinct divisors: 1, 2, 3, 4, 5, 6, 7, 9, 10, 12, 14, 15, 18, 20, 21, 28, 30, 35, 36, 42, 45, 60, 63, 70, 84, 90, 105, 126, 140, 180, 210, 252, 315, 420, 630, and 1260. No smaller number has so many divisors, so 1260 is a divisibility record-setter. The fairly popular numbers 8, 24 and 40 are missing from this list; 1260 is the largest such record-setter not divisible by 8.

In the Rubik's Cube group (see 43252003274489856000) it is possible to make a set of moves that scrambles the cube, and for which the same set of moves must be repeated a total of 1260 times to get back to the initial position. In group theory terminology that set of moves is an "element" of the Rubik's Cube group, and its "order" is 1260. There are no elements with order greater than 1260.

Numbers with lots of divisors were popular in ancient civilisations; well-known examples include 12, 24, 60 and 360. 1260 is not as famous but it does appear in the Bible, both explicitly in Rev 12:6 and implicitly with the phrase "a time, and times, and half a time" in Rev 12:14. "A time, and times, and half a time" is generally taken to be a reference to a period of 3 1/2 years, and an allusion to similar phrases in Dan 7:25 and Dan 12:7. The "years" in question are probably the Babylonian "lunar years" of 360 days, so 3 1/2 years is 1260 days — the same time period referred to explicitly in Rev 12:6, and described as "42 months" (42×30 days) in Rev 11:2 and Rev 13:5. All of these are meant to refer to a "prophetic", not literal, period of time in which "1 day" in the text represents one year in real life (this is kind of like the Hindu manvantara, see 4320000000). In other words, the period being referred to is 1260 years, or perhaps 1260 Babylonian lunar years which would be 453600 days.

Round numbers with more divisors were more likely to show up in ancient writings, partly because of the difficulty of manipulating "odd" numbers accurately.
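The divisor record claimed above (36 divisors, with no smaller number reaching that many) is small enough to verify by brute force. This sketch uses a naive trial-division count; the function name is my own.

```python
# Count divisors naively and confirm that 1260 is the first
# positive integer to reach 36 of them.
def num_divisors(n: int) -> int:
    return sum(1 for d in range(1, n + 1) if n % d == 0)

assert num_divisors(1260) == 36
assert all(num_divisors(n) < 36 for n in range(1, 1260))
```

The same loop, run further, would recover the full list of divisibility record-setters (the "highly composite numbers") that this entry alludes to.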
1260 might have been more appealing to the ancients because 1260 days is exactly 180 weeks (180 being another record-setter), or perhaps because 1260 is 12×100+60. There are many other numbers involved in apocalyptic predictions, such as 1290, 1335 and 2300 (intervals of years and of days mentioned in the book of Daniel); 945000=360×(1290+1335); etc.

Mentioned in the Old Testament apocalyptic book Daniel (see 1260).

The sum of the first 8 cubes, and like all such sums, also the square of the 8th triangular number. Since that is 36^2, it is also a 4th power. See also 216 and 1225.

This is the number of distinct positive integers that can be represented in Roman Numerals in six characters or less. To learn why this would be of particular interest to the American Kennel Club, see Matt Parker's video.

1331 = 11^3, a fact that holds in all bases higher than 3 (where you get "2101"). See 121.

Mentioned in the Old Testament apocalyptic book Daniel: "Blessed is he that waiteth, and cometh to the thousand three hundred and five and thirty days." (KJV). See also 1260.

1353 = 13+14+15+...+52+53, a member of sequence A186074 and its subset {15, 1353, 133533, 13335333, ...} all of which have the same property. See 429 for more. (Contributed by Matt Goers)

The number of ways to pick 5 numbers from 1 to 25 (with no two the same) and have them add up to 65.

The number of junctions between the neurons and muscles in the nematode worm C. elegans. See 959.

1460 is 365×4, and therefore the number of years that have to pass (in a Julian calendar) to accumulate an entire year's worth of leap days. This was known to the Egyptians as the Sothic cycle, the number of 365-day calendar years that have to pass for the calendar to once again agree with the seasons. The Egyptians had a simple 365-day calendar for civil purposes (being close enough to the equator that the seasons didn't affect the length of their day too much).
However there was one critical seasonal cycle, the flooding of the Nile (which comes in August, and derives from seasonal rainfall patterns in Sudan, Ethiopia, etc.). This generally came at a time close to the "heliacal rising" of Sirius, which is when Sirius first becomes visible in the early morning just before dawn. (Other sources I found say that the rising of Orion at sunset coincided with the flooding of the Nile.) Since the rising of the star is much less variable than the timing of the flood, it can be used to try to calibrate a calendar for purposes of determining how often to have leap years. The 365-day Egyptian calendar continued in unbroken use for two (or perhaps three) entire cycles. After a cycle had been completed, by looking back at the written record of Pharaohs' reigns, they could calculate the length of the cycle. This knowledge was used to set up the 365.25-day Roman calendar. On the advice of the Alexandrian astronomer Sosigenes, the approximation 1460 = 365×4 was used by Julius Caesar for the Julian calendar. The Egyptian calendar (with its original month names and 5-day intercalary month coinciding with the beginning of the Nile flood period) remained in use and was also modified to a 365.25-day cycle; it is still used to this day (as the Coptic Orthodox Church's liturgical calendar). Over the long term, the cycle would be a larger number of years, closer to the true average of 1508.0833. But because Sirius is not on the Ecliptic, the precession of the Earth's axis causes it to rise at a rate that varies over the period of a 25800-year precession cycle. The number of days in a "quadrennium" or "Olympiad" of 4 Julian years: 365×4+1. Divide the length of a tropical year (in this particular case, the 365.242189670 value) by its fractional part: 365.24218967 / 0.24218967 = 1508.08327... This gives the number of years for a 365-day calendar to "drift all the way around" and once again align with the seasons.
It is also the average length of the Sothic cycle over an entire 25800-year precession cycle. See 1460.

In the year 1514, German artist Albrecht Dürer created one of his better-known works, Melencolia I, a still-life containing many symbols of alchemy including a 4×4 associative magic square containing the numbers 15 and 14 in adjacent positions. The full 4×4 square appearing in Melencolia is:

16  3  2 13
 5 10 11  8
 9  6  7 12
 4 15 14  1

Each row (such as 16+3+2+13), column and diagonal adds up to 34. It has to be 34 because the sum of the four rows is the sum of all 16 of the numbers, giving 4M = 16×17/2 = 136, where M is the magic sum; thus M = 34. Also, the four numbers in any one quadrant (for example, the upper-left quadrant, 16+3+5+10) add to 34. Because this magic square is "associative", there are a lot of other sets of 4 squares that add to 34, such as the four corners, the four in the centre, and other symmetrical patterns such as 3+2+15+14, 16+6+11+1, etc. Any pattern shaped like one of these, including rotations and reflections (a total of 28 patterns), will work. [The original page illustrates the qualifying patterns with ten 4×4 cell diagrams, not reproduced here.]

The year of the beginning of the Windows NT time counting system (specifically, midnight GMT at the beginning of Jan 1 1601, using the Gregorian calendar and a "proleptic" definition of "GMT"). See 134774 and 11644473600.

The length of a mile in meters. This is exact, since the inch is defined as precisely 2.54 cm. See also 1.609344, 63360 and 1609344.

The smallest 4-digit Armstrong or "narcissistic" number: 1634 = 1⁴+6⁴+3⁴+4⁴.
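The magic-square claims above are quick to verify. This Python sketch (not from the original page) checks the rows, columns, diagonals and 2×2 quadrants of Dürer's square:

```python
# Dürer's square from Melencolia I; every row, column, main diagonal
# and 2x2 quadrant should give the magic sum 34.
durer = [
    [16,  3,  2, 13],
    [ 5, 10, 11,  8],
    [ 9,  6,  7, 12],
    [ 4, 15, 14,  1],
]

M = 34
checks = []
checks += [sum(row) for row in durer]                             # rows
checks += [sum(durer[i][j] for i in range(4)) for j in range(4)]  # columns
checks += [sum(durer[i][i] for i in range(4)),
           sum(durer[i][3 - i] for i in range(4))]                # diagonals
checks += [sum(durer[i + di][j + dj] for di in (0, 1) for dj in (0, 1))
           for i in (0, 2) for j in (0, 2)]                       # quadrants

assert all(s == M for s in checks)
print(len(checks), "sums, all equal to", M)  # 14 sums, all equal to 34
```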
See also

Number of years from the Creation to the Flood in the Hebrew tradition (and Judeo-Christian Bible). See 86400 for more details.

In the classical Chinese 3×3 magic square, the rows can be treated as 3-digit numbers: 492+357+816 = 294+753+618 = 834+159+672 = 438+951+276 = 1665. This works because of the symmetry of the arrangement of the digits: 1665 is the repunit 111 times the magic sum 15.

In 1974 a radio message was sent from the Arecibo observatory towards star cluster M13 (a group of stars about 25,100 light years away) containing 1679 bits of data modulated by frequency shift keying. The number 1679 was chosen because it is a semiprime — a receiver of the message would presumably notice this, then try to arrange the data into a 23×73 or 73×23 rectangle to look for a picture.

This is 8×7×6×5 = 8!/4!. Numbers of the form (2N)!/N! are called quadruple factorials. The quadruple factorials are: 1, 2, 12, 120, 1680, 30240, 665280, 17297280, ... (Sloane's A1813). Confusingly, there is another definition of "quadruple factorial" that is more like the "double factorials", in which you form a product by starting with some integer N and subtracting 4 each time: 1680 = 14×10×6×2. Using this definition, all of the (2N)!/N! numbers are included, plus three intermediate values between each one: 1, 1, 2, 3, 4, 5, 12, 21, 32, 45, 120, 231, 384, 585, 1680, 3465, 6144, 9945, 30240, 65835, 122880, ... (Sloane's A7662). See also 105. (I happen to think that this name "quadruple factorial" is even worse than "double factorial", but that's just me.)

The product of three consecutive integers (a 3-d oblong number), and also one of the central numbers in Pascal's Triangle, namely the 7th term in row 13. This coincidence happens because 7! = 5040 = 10×9×8×7 (and therefore, 13!/(7!×6!) = 13!/10!); see 3628800 for more on this.

This is 12³, the number of cubic inches in a cubic foot. It is sometimes called a great gross. See also 1729 and 3456.
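The (2N)!/N! form of the quadruple factorials above reduces to a one-line function; a small Python sketch (not part of the original page):

```python
from math import factorial

# Quadruple factorials in the (2N)!/N! sense: 1, 2, 12, 120, 1680, ...
def quad_factorial(n):
    return factorial(2 * n) // factorial(n)

print([quad_factorial(n) for n in range(8)])
# [1, 2, 12, 120, 1680, 30240, 665280, 17297280]
```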
The Hardy-Ramanujan "Taxicab number", made famous by a story involving the two early 20th-century mathematicians. As the story goes, Hardy commented to Ramanujan that he had just ridden in taxicab number 1729 and that the number had no particular significance that he (Hardy) knew of. Ramanujan replied that 1729 did indeed have a special property: it is the smallest number that can be expressed as the sum of two cubes in two different ways: 1729 = 12³+1³ = 10³+9³. It is thus also a near-miss to Fermat's Last Theorem. This feat seems extraordinary, and most write it off to the fact that Ramanujan had a sort of savant calculation ability. Ramanujan did work on the problem of finding A, B, C such that A³+B³ = C³±1; see my article on Sequences Related to the Work of Srinivasa Ramanujan. Even without that, it is easy to see how this particular property of this number could be noticed by considering the following: First of all, once a modest-sized list of cubes has been memorised (something that many folks with a passion for numbers do, see 7776 and the Feynman anecdote below), it is easy to recognise 1729 as the combination of 10³=1000 and 9³=729, and it is next to 12³=1728. Ramanujan also knew that 1729 is the lowest such number. There are a lot of sums to check — if one were to consider an exhaustive search, there are almost a hundred ways to add two cubes and get a total that is less than 1729. The sums are: 1+1=2, 1+8=9, 8+8=16, 1+27=28, 8+27=35, 27+27=54, 1+64=65, ... (OEIS sequence A3325). Somehow you have to mentally look through the "list" to find if any of these occur twice. An important insight, and the type of thing mathematicians like Ramanujan would surely notice, is that any whole number N differs from its cube N³ by a multiple of 6. (This is related in a rather nifty way to the symmetries of the cube, viz.
its 3-fold rotational symmetry around the main diagonal combined with a mirror symmetry.) For example, 1³=0×6+1, 2³=1×6+2, 3³=4×6+3, 4³=10×6+4, 5³=20×6+5, (6+0)³=36×6+0, (6+1)³=57×6+1, (6+2)³=85×6+2, and so on. Here are the cubes up to 12³ classified with letters a through f for the 6 different values of N³ mod 6:

1 a      8 b      27 c     64 d     125 e    216 f
343 a    512 b    729 c    1000 d   1331 e   1728 f

The sum 729+1000 is a c plus a d, a sum of the form 6x+1. To get another 6x+1 sum requires an a+f, a b+e, or a different c+d type sum. So there are far fewer combinations to check for any given sum. To get a sum from two cubes, you have to increase one of the cubes whilst decreasing the other — and the bigger cubes involve bigger increments. So, going from 1000 up to 1331, we can't just reduce the 729 one step down to 512: the sum would be too big. See also 87539319, the sum of two cubes in three ways. See also 50, 65, 635318657, 18426689288, 588522607645608, 336365328016955757248, and 10^{102.1485709110445×1038).

Another lovely anecdote about 1729 involves Richard Feynman, the physicist from Cal Tech. Feynman was in a restaurant in Brazil and ended up in a sort of mind-vs-machine contest with an expert abacus operator. After losing to the abacist in addition (handily) and multiplication (a closer race), then coming up dead even in long division, he was challenged to extract the cube root of "any old number", and the number they were given, intentionally chosen at random, turned out to be 1729.03. Feynman remembered that 1728 is 12³, so in his head he did the following (which can be derived from the derivative of x^k, or a simple inversion of the binomial expansion of (a+b)³):

(1728+1)^(1/3) = 12 (1+1/1728)^(1/3) = 12 (1 + (1/3)(1/1728) + ...) ≈ 12 + 1/432

He then began performing the long division 1/432 in his head, and got as far as "12.002..." before being proclaimed the winner. Since 12+1/432 = 12.0023148148...
and 1729.03^(1/3) = 12.0023837856..., he could have given one more digit of 1/432 and still been correct.

1729 is also the third Carmichael number. Its factors are 7×13×19; J. Chernick proved that any number of the form (6n+1)×(12n+1)×(18n+1) is a Carmichael number provided that all three factors are prime. The "Chernick numbers" are: 1729, 294409=37×73×109, 56052361, 118901521, 172947529, ... (Sloane's sequence A33502).

The first of a set of 4 equally-spaced primes, with no other primes in between: 1741, 1747, 1753, and 1759 are all prime. See also 47, 251, 9843019, 121174811 and 19252884016114523644357039386451.

The mass ratio between the proton and electron. Some folks believe it has a deeper meaning, or a magic formula, similarly to the situation with the fine-structure constant.

The mass ratio between the neutron and electron. It sometimes attracts attention similar to that given to the fine-structure constant.

The number of meters in a nautical mile. This unit was originally defined as the length of one minute of latitude along a meridian (or, more approximately, any great circle) on the Earth. This makes the circumference of the Earth 360×60=21600 nautical miles long. This was done for a utilitarian reason — you can take a distance on a chart, measure it against the latitude gridlines on the edge of the map, and that tells you how many nautical miles long it is. The approximation varies with location because the Earth is not a perfect sphere; most of the variation is a gradual decrease in length from pole to equator. See also 1852.216, 5280 and 20003931.4585.

The length (in meters) that the nautical mile (see 1852) would need to be in order for the mean meridian (see 20003931.4585) to be exactly 60×180 = 10800 nautical miles (one nautical mile per arc-minute). See also 10800.

1951 is a prime number, and curiously the year in which the record for largest-known prime was broken for the first time in 75 years.
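The exhaustive search described under 1729 above is tiny by computer standards. A brute-force Python sketch (not part of the original page) confirms that 1729 is the smallest sum of two positive cubes in two different ways:

```python
from collections import defaultdict

# Collect every n = a^3 + b^3 with 1 <= a <= b < 20, then find the
# smallest n that arises from two different pairs of cubes.
sums = defaultdict(list)
for a in range(1, 20):
    for b in range(a, 20):
        sums[a ** 3 + b ** 3].append((a, b))

taxicab = min(n for n, pairs in sums.items() if len(pairs) >= 2)
print(taxicab, sums[taxicab])  # 1729 [(1, 12), (9, 10)]
```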
The record was broken twice in that year — the first time by Ferrier using a mechanical desk calculator, then a second time by Miller and Wheeler using an electronic computer.

2001 = 3×23×29, three distinct primes which are distinct from the three primes that make up 1001. This causes the rather nice digit pattern of the primorial 6469693230 = 2003001×323×10. Because of its three distinct primes, and along with other nearby numbers like 1998, 2001 is part of John H. Conway's advanced (for numbers up to 712) mental factoring technique. Because the first year was year 1, and 1+2000=2001, 2001 is the 2000th anniversary of the year 1 (in whatever calendar you wish; most recently this happened in the Christian calendars, such as the Gregorian calendar). New Year's revelers made a bigger deal about the year 2000, but I consider 2001 to be the "real" millennium year.

A popular apocalyptic date (usually given as the date of the winter solstice, December 21st) because it is close to a hypothesised rollover date for a certain Mayan calendar (see 5126). That calendar was more likely designed to roll over in the year 4772 AD (on October 13th according to 80).

For New Year's Eve 2013, Hans Havermann pointed out (to math-fun) that:

((10/9!)×8!)×7! - 6!×5!/4! + 3!×2! + 1! = (10/362880)×40320×5040 - 720×120/24 + 6×2 + 1 = 5600 - 3600 + 12 + 1 = 2013

2013 was the last of several recent years in which such a simple construction was possible:

10/9!×8!×7! - 6!×5!/4! + 3!×2!×1! = 2012
10/9!×8!×7! - 6!×5!/4! + 3!×2!-1! = 2011
(no 2010)
10/9!×8!×7! - 6!×5!/4! + 3!+2!+1! = 2009
10/9!×8!×7! - 6!×5!/4! + 3!+2!×1! = 2008
10/9!×8!×7! - 6!×5!/4! + 3!+2!-1! = 2007
(gap)
10/9!×8!×7! - 6!×5!/4! + 3!/2!+1! = 2004
10/9!×8!×7! - 6!×5!/4! + 3!/2!×1! = 2003
10/9!×8!×7! - 6!×5!/4! + 3!/2!-1! = 2002

A "self-describing" number, like 1210 and 21200; see 6210001000 for more.

The first Mersenne number that is not prime; see 496 and 8384512.

The largest known power of 2 whose digits are all even.
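The all-even-digits claim for 2048 is easy to probe for small exponents. A quick Python sketch (not part of the original page) lists every exponent below 500 whose power of 2 has only even decimal digits:

```python
# Powers of 2 whose decimal digits are all even: the search finds
# 2, 4, 8, 64 and 2048 (exponents 1, 2, 3, 6, 11) and nothing else
# with exponent under 500.
hits = [n for n in range(1, 500)
        if all(d in "02468" for d in str(2 ** n))]
print(hits)            # [1, 2, 3, 6, 11]
print(2 ** hits[-1])   # 2048
```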
Higher powers of 2 have been checked at least as far as 2^7725895275426. One does not need to check all of the digits, because (for example) you can check just the last 20 digits, and the odds are only 1 in 2²⁰ that all of those will be even.

The product of 27=3³ and 81=3⁴, and containing the same four digits (see also 8127).

The house number of the childhood home of Martin Gardner, the longtime writer of the Mathematical Games column of Scientific American responsible for so many amateur mathematicians' introduction to popular mathematics. In one of his columns his character Dr. Matrix described many properties of 2187: it is the 297th lucky number; add its digits in reverse (7812) and get 2187+7812=9999; its digits can also be arranged to make 1728 and 8127; etc. George Lucas was inspired by Arthur Lipsett's film mashup 21-87; "the force" comes from a line by Roman Kroitor in that movie. In a tribute reference, when Han Solo and Luke rescue Princess Leia in Star Wars, they find her in cell 2187 (within cell block 1138).

2202 yojana is a distance that figures in a now-famous comment by Sayana, a minister to King Bukka I of 14th-century India, in his commentary on the Rigveda. The full quote is "[it is] remembered that the sun traverses 2,202 yojanas in half a nimesha". A yojana is a unit of distance whose definition varies throughout history and by context. It is agreed that a yojana is 4 kro'sha, but the definition of kro'sha can be either 1000 or 2000 dhanu. As a result, a yojana is either 4.5 or 9 miles (other sources say 5, "about 8 to 10", and 40). If the figure of 9 miles is used, the speed of the sun is 39636 miles per nimesha. A nimesha is 1/405000 of a day, so converting to standard units, we have 299128 kilometers per second — very close to the speed of light. This is usually taken as being much more significant than mere coincidence would suggest, with the implication that Sayana was actually speaking of the speed of light, not of the Sun.
However, it was common to estimate the speed of the Sun in its daily "orbit" in the old geocentric cosmology model, and some Hindu/Indian estimates are comparable. For example, in the Vayu Purana, chapter 50, the Sun is said to move 3150000 yojana in 1/30 of a day, or about 16000 kilometers per second. See also 405000.

A member of the Lucas-Lehmer-like sequence 3, 7, 47, 2207, 4870847, ... (each term is the square of the previous term, minus 2); see 47 for more.

This number appears in the film Monsters, Inc. as the code number for an incident in which a monster has been "contaminated" with something from a child's bedroom (scarer George has a child's sock on his back upon returning to the factory floor). It is an inside joke from the Pixar team that made the movie, but the exact explanation is uncertain. It could refer to US law, title 18, section 2319, which details the penalties for criminal infringement of copyright; or the "23" and "19" could represent the letters W and S respectively, initials of "white sock" or even "Wazowski" and "Sullivan" (the movie's main characters).

This is a prime number whose digits are the four 1-digit primes, in ascending order. See also 3257, 4567, and 7.232325232...×10^3119.

This number appears in a widely-circulated, and wildly inaccurate, email warning about long-distance phone charges. While there is still a bit of truth to the warnings, the central figure of "$2425 per minute" (meaning 2425 U.S. dollars) has always been wrong. The initial "$24" happens to be the hexadecimal code for the '$' character in the ASCII character set. At some point early in this particular chain email, the dangerously high telephone toll rate was a much more realistic (and probably accurate) $25/minute. But an email software with improper handling of special characters turned the '$' character into a "hexadecimal escape", which is $ followed by the ASCII code in base 16: $24.
Combined with the original number 25, this resulted in $2425, a much more frightening figure which became the primary reason for the email becoming an addictive chain letter.
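Chernick's Carmichael-number construction, quoted under 1729 above, can be reproduced with a few lines of Python (not part of the original page); it regenerates the start of Sloane's sequence A33502:

```python
# (6n+1)(12n+1)(18n+1) is a Carmichael number whenever all three
# factors are prime; generate the first five such "Chernick numbers".
def is_prime(m):
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    f = 3
    while f * f <= m:
        if m % f == 0:
            return False
        f += 2
    return True

chernick = []
n = 0
while len(chernick) < 5:
    n += 1
    a, b, c = 6 * n + 1, 12 * n + 1, 18 * n + 1
    if is_prime(a) and is_prime(b) and is_prime(c):
        chernick.append(a * b * c)

print(chernick)  # [1729, 294409, 56052361, 118901521, 172947529]
```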
Fast information spreading in graphs with large weak conductance

Gathering data from nodes in a network is at the heart of many distributed applications, most notably when performing a global task. We consider information spreading among n nodes of a network, where each node v has a message m(v) which must be received by all other nodes. The time required for information spreading has previously been upper-bounded with an inverse relationship to the conductance of the underlying communication graph, which implies high running times for graphs with small conductance. The main contribution of this paper is an information spreading algorithm which overcomes communication bottlenecks and thus achieves fast information spreading for a wide class of graphs, despite their small conductance. As a key tool in our study we use the recently defined concept of weak conductance, a generalization of classic graph conductance which measures how well-connected the components of a graph are. Our hybrid algorithm, which alternates between random and deterministic communication phases, exploits the connectivity within components by first applying partial information spreading, after which messages are sent across bottlenecks, thus spreading further throughout the network. This yields substantial improvements over the best known running times of algorithms for information spreading on any graph that has a large weak conductance, from a polynomial to a polylogarithmic number of rounds. We demonstrate the power of fast information spreading in accomplishing global tasks by applying it to the leader election problem, which lies at the core of distributed computing. Our results yield an algorithm for leader election that has a scalable running time on graphs with large weak conductance, improving significantly upon previous results.
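The paper's hybrid algorithm itself is not reproduced here, but the basic randomized primitive it builds on, round-based gossip in which every node forwards what it knows to a random neighbour, can be sketched in a few lines of Python (an illustration under simplifying assumptions, not the authors' algorithm):

```python
import random

def push_gossip_rounds(adj, seed=0):
    """Rounds of push gossip until every node knows every message.

    adj[v] lists the neighbours of node v; node v starts with only
    its own message m(v). Each round, every node sends its whole set
    of known messages to one uniformly random neighbour.
    """
    rng = random.Random(seed)
    n = len(adj)
    known = {v: {v} for v in range(n)}
    rounds = 0
    while any(len(msgs) < n for msgs in known.values()):
        updates = [(rng.choice(adj[v]), set(known[v])) for v in range(n)]
        for target, msgs in updates:
            known[target] |= msgs
        rounds += 1
    return rounds

# On a well-connected graph (here the complete graph on 16 nodes),
# spreading completes in a logarithmic-looking number of rounds.
K16 = [[u for u in range(16) if u != v] for v in range(16)]
print(push_gossip_rounds(K16))
```

On graphs with bottlenecks (small conductance), this plain primitive is exactly what becomes slow; that is the regime the paper's weak-conductance analysis addresses.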
Original language: English
Title of host publication: Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011
Pages: 440-448
Number of pages: 9
State: Published - 2011
Event: 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011 - San Francisco, CA, United States, 23 Jan 2011 → 25 Jan 2011
Publication series: Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms
Keywords: Distributed computing; Information spreading; Leader election; Randomized algorithms; Weak conductance
All Science Journal Classification (ASJC) codes: Software; General Mathematics
Collins Cambridge International AS & A Level Physics

Who crash-tests aeroplanes? This empty Cessna 172 aeroplane has just been dropped from a height of 25 m at NASA's Landing and Impact Research Facility. The purpose of this crash test is to test whether an emergency locator beacon will still work after the impact. If the emergency beacon still works, the aeroplane will be found more quickly after an accident. Damage in impacts like this happens because of the large forces that act during sudden deceleration.

Prior understanding
You may remember adding and resolving vectors from Chapter 1. You will be finding resultant forces and components of momentum vectors in this chapter. You may recall the definitions of velocity and acceleration from Chapter 2 and the use of velocity-time graphs. You may have considered acceleration in free fall. It will also be useful if you recall the concepts of weight, mass, kinetic energy, forces and equilibrium that you may have covered previously.

Learning objectives
In this chapter you will learn the meaning of and the relationship between mass and weight. You will learn how force, mass and acceleration are related and will discover the concept of momentum. You will learn Newton's laws of motion and how to apply them. You will also discover how friction and drag forces affect the motion of objects. You will explore whether momentum and kinetic energy are conserved in elastic and inelastic interactions and discover the principle of conservation of momentum.

3.1 Mass and weight (syllabus 3.1.1, 3.1.6)
3.2 Newton's first and second laws of motion; momentum (syllabus 3.1.2–3.1.5)
3.3 Forces in interactions: Newton's third law (syllabus 3.1.5)
3.4 Motion with resistive forces (syllabus 3.2.1–3.2.3)
3.5 Conservation of momentum (syllabus 3.3.1, 3.3.2)
3.6 Momentum and kinetic energy (syllabus 3.3.3, 3.3.4)

266888 A-level Science Physics_CH03.indd 55 9/21/19 11:29 AM

3 Dynamics
3.1 Mass and weight
What is mass?
Mass can be defined as the quantity of matter in an object. Mass is a scalar quantity which has no direction associated with it. Mass is constant for any object that is at rest or travelling much slower than the speed of light. The unit of mass used in equations is the SI unit, the kilogramme (kg). The mass of an object is also a measure of its resistance to change in motion, that is, a measure of the object's inertia.

Think of a table tennis ball and a golf ball. Both of these have a diameter of about 40 mm, but their masses are very different. The mass of the table tennis ball is only 2.7 g while the mass of the golf ball is 46 g. The difference in mass is because the table tennis ball only has air inside, while the golf ball is solid rubber inside. Both these balls are placed on a flat surface such as a table top. Now imagine blowing on them. Which one is easier to move? The table tennis ball is easier to move because it has less mass.

What is weight?
Weight and mass are two terms that are often confused. This is because they are not always used correctly in everyday speech. Weight is the force of gravity on an object. Its unit is the newton, N, and it is a vector quantity. Weight acts in the same direction as gravity; on Earth, this is towards the centre of the Earth. The weight of an object is not constant, as it varies according to the strength of the gravitational field. The term gravitational field is used to describe any place where the effects of gravity can be detected. Hence, all objects will have weight when they are in a gravitational field.

The gravitational force on an object of mass m is F = mg, where g is the gravitational field strength in N kg⁻¹, defined as the gravitational force (in N) on a 1 kg mass. The term g is also the acceleration of free fall (or acceleration due to gravity) in m s⁻².

Link: In Topic 3.2 you will learn that the equation F = ma is a form of Newton's second law. Comparing F = mg with F = ma shows you that g is the acceleration of the mass m. Since g is constant in a uniform gravitational field, all masses have the same acceleration when they fall vertically in this field, if gravity is the only force acting.

Link: In Topic 3.2 we will show that, by definition, 1 N = 1 kg m s⁻². From this we can see that 1 N kg⁻¹ = 1 m s⁻².

Near the Earth's surface g = 9.81 N kg⁻¹ = 9.81 m s⁻². The gravitational force F is the object's weight, and so can be given the symbol W, and we have

W = mg
weight in N = mass in kg × acceleration of free fall in m s⁻²

Link: The concept of free fall was introduced in Topic 2.3. All objects in free fall have the same constant acceleration near the Earth's surface, ignoring the effect of air resistance. You will learn more about the forces acting on objects due to gravitational fields in Chapter 13, where you will discover how the force between two masses depends on the masses and the separation between them.

Consider a 1 kg mass and a 2 kg mass being dropped together. Their weights are the only forces acting on them. The weight of the 1 kg mass is 1 kg × 9.81 m s⁻² = 9.81 N and the weight of the 2 kg mass is 2 kg × 9.81 m s⁻² = 19.62 N.

The acceleration of the 1 kg mass is given by a = F/m = 9.81 kg m s⁻² / 1 kg = 9.81 m s⁻².
The acceleration of the 2 kg mass is given by a = F/m = 19.62 kg m s⁻² / 2 kg = 9.81 m s⁻².

Key ideas
➜ Mass is a property of an object that resists a change in its motion. This resistance to change in motion is called inertia.
➜ Mass is a scalar quantity with unit kg.
➜ The gravitational force on an object is its weight. Weight is a vector with unit N.
➜ Weight is the product of an object's mass and its acceleration of free fall.

1. A block of metal has a mass of 6.7 kg.
(a) Calculate the weight of the block on Earth where g = 9.81 m s⁻².
(b) Calculate the weight of the block on the Moon where g = 1.63 m s⁻².
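The relationship W = mg reduces to a one-line calculation. The sketch below (Python, not part of the textbook) uses an illustrative 5.0 kg mass rather than the values in Question 1:

```python
# W = mg: weight in newtons from mass in kg and field strength g
# (g in N/kg, equivalently m/s^2). The 5.0 kg mass is illustrative.
def weight(mass_kg, g):
    return mass_kg * g

print(round(weight(5.0, 9.81), 2))  # 49.05 (N, on Earth)
print(round(weight(5.0, 1.63), 2))  # 8.15 (N, on the Moon)
```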
3.2 Newton's first and second laws of motion; momentum

Newton's first law
Imagine you are sitting on a moving bus when the driver suddenly applies the brakes. What happens to you?

Link: Inertia is the resistance to motion that an object has because of its mass. See Topic 3.1.

Figure 3.1 The bus slows down, but you continue forwards when the bus decelerates. Your tendency to continue forwards is due to your inertia.

You keep moving forward when the bus brakes because, while there is a force on the bus to make it stop, no force acts on you. In 1687 Isaac Newton published Mathematical Principles of Natural Philosophy, in which he described the motion of objects. His laws of motion were formulated in the book, and they can be used to describe the movement of all objects (although some corrections need to be applied for very small particles, such as electrons, and for objects travelling close to the speed of light).
The normal contact forces here come from the ground pushing up on all four tyres. This type of force is often called a ‘normal reaction force’. • The car is not getting faster or slower because the driving force is equal and opposite to the resistive forces. • The car is travelling in a straight line because there are no forces acting to its right or left (into or out of the page). sum of normal contact forces driving forces The resultant force is found by the vector addition of all the forces acting. Recall vector addition from Chapter 1. You will meet resultant forces again in the context of equilibrium in Chapter 4, topic 00. You will also learn more about other conditions for equilibrium in Chapter 4. The car is said to be in uniform motion because the forces on it are balanced. The resultant force on it is zero. The resultant force is the sum of all forces acting driving resistive on an object. As forces are vector quantities, those of equal magnitude acting in force forces opposite directions will have a sum of zero. When all the forces on an object are balanced, an object is said to be in translational equilibrium. It is useful to draw a simple diagram that represents all the forces acting on an object. A diagram that shows only the forces acting on one object, which is weight represented by a dot, is called a free body diagram. Figure 3.3 A free body diagram for the Newton’s first law describes why the car, if travelling at a constant speed in a car in Figure 3.2 straight line, will not speed up, slow down or change direction unless one of the forces becomes larger than the one opposing it. Tip Newton’s first law also describes why the car, if stationary, will not begin to move until We cannot tell the one of the forces becomes larger than the one opposing it. difference between an object being at rest (stationary) and it having uniform velocity by looking at a vector diagram. 
All of the forces will be equal and opposite, giving a resultant force of zero in both cases.

2. (a) (i) Name the force or forces acting on an object at rest on a flat surface.
(ii) Draw a free body diagram to illustrate your answer to part (i).
(b) Draw two other free body diagrams showing the forces:
(i) when the object is being pushed against resistive forces to start movement
(ii) when the object is moving at constant speed with resistive forces.

Newton's second law
Look again at the free body diagram for the car in equilibrium in Figure 3.3. At equilibrium, the driving force and the resistive forces are equal and opposite, so there is no resultant force forward or backward. If the driving force starts to become larger than the resistive forces then the car will accelerate forwards (to the left in the diagram). If the driving force again becomes equal to the resistive forces, the car will again move at a constant speed. If the resistive forces become larger than the driving force, then the car will decelerate.

Newton's second law describes the relationship between resultant force and the changes in motion that are produced by the resultant force. You may recall from previous courses the relationship between resultant force F, mass m and acceleration a, that is

force (in N) = mass (in kg) × acceleration (in m s⁻²)
F = ma

This equation is a useful summary of Newton's second law for an object of constant mass m. The equation is used to define the SI unit of force, the newton, N. One newton is the resultant force that gives a mass of one kilogram an acceleration of one metre per second squared: 1 N = 1 kg m s⁻².

At the start of Topic 3.1 we thought about why a table tennis ball is easier to blow across a table than a golf ball, and introduced the concept of inertia. Newton's second law allows us to quantify the effect of inertia.
It tells us why mass makes a difference when you try to accelerate an object. A larger mass will require a larger force than a smaller mass to accelerate it by the same amount.

Figure 3.4 In order to get the same acceleration, you need to push the car with a much greater force than the shopping trolley.

3. The person in Figure 3.4 pushes the car and the shopping trolley separately on level ground with a force of 520 N on each object. Ignore friction.
(a) Calculate the initial acceleration of the car, whose mass is 950 kg.
(b) Calculate the initial acceleration of the shopping trolley, whose mass is 34 kg.

A fast moving object with a large mass, such as the truck in Figure 3.5, is difficult to stop. Newton's second law tells us why. Say the truck has a mass of 44 000 kg and a velocity of 25 m s⁻¹. What force is needed to bring the lorry to a stop in a time of 10 s? The average acceleration is the change in velocity divided by the time taken:

a = (0 − v) / t

Figure 3.5 A large truck moving at speed needs a large force to stop it.
Answer
p = mv = 44 000 kg × 25 m s–1 = 1.1 × 10^6 kg m s–1

4. (a) A bullet has a mass of 22 g and a velocity of 490 m s–1. Calculate its momentum. (b) The New Horizons space probe, mass 470 kg, left Earth's orbit with a speed of 16 000 m s–1. Calculate its momentum. Give your answer in standard form.

The force required to bring the lorry to a stop depends not only on its momentum, but also on how quickly we want to stop it. To stop it in a shorter time would need a larger force than to stop it in a longer time.

Worked example
What force would be required to stop the lorry in Figure 3.5 in 5 s rather than in 10 s?

F = ma = mv / t = (4.4 × 10^4 kg × 25 m s–1) / 5.0 s = 2.2 × 10^5 N

Link: You can refer to the Experimental Skills section in Topic 2.3 for a reminder of direct proportionality.

To stop the lorry in half the time requires double the force. We describe this relationship between force and time as inversely proportional. Inversely proportional means that as one quantity increases the other quantity decreases by the same factor. For example, if y is inversely proportional to x, then as y doubles, x halves. We can write this inverse proportional relationship as y ∝ 1/x. In this case, F ∝ 1/t.

The force needed to stop the lorry is directly proportional to its momentum and inversely proportional to the time taken.

5. A tennis ball of mass 65 g is thrown with a velocity of 22 m s–1. A person catches the ball and brings it to rest in a time of 0.25 s. Calculate the force required to bring the ball to rest.

Experimental skills: Force, mass and acceleration

Apparatus
The apparatus is shown in Figure 3.6. Position the pulley (or smooth cylinder) so that the mass hanger can fall vertically through a distance of 1.00 m or more.

The task
In Figure 3.6, a falling mass is attached to the trolley by a thread that has very little stretch.
The trolley is held in place then accelerates from rest when released. If we ignore friction the accelerating force is the same size as the weight of the falling masses. The size of the force can be altered by varying these masses. The average acceleration of the trolley can be measured by timing how long it takes the trolley to move a distance of 1.00 m between two marks, assuming the trolley accelerates uniformly.

Questions
P1. (a) Sketch a velocity-time graph for a uniformly accelerating trolley. (b) Show how to calculate the acceleration of the trolley from measurements of the time taken to move a distance of 1.00 m from rest.

Link: You may want to refer back to the equations of motion in Topic 2.2.

P2. Describe what precautions you should take to prevent injury to yourself and damage to the floor.
P3. (a) Describe how you would ensure that the acceleration is as uniform as possible between the start and the 1.00 m mark. (b) Explain the purpose of the trial runs in selecting the starting mass on the hanger. (c) Explain why masses are removed from the trolley and added to the hanger, rather than keeping the mass of the trolley constant and only increasing the mass on the hanger. (d) Explain how to calculate the force acting on the trolley from the mass on the hanger.
P4. Draw a suitable table to record the results for this experiment, allowing for repeat measurements and for calculated quantities derived from the raw data.
P5. Below are the results recorded by a student in this investigation. The values for acceleration are mean values from repeat measurements. (a) Use these results to plot a graph to determine whether they show that acceleration is directly proportional to force. Draw a straight line of best fit.

Newton's second law describes how a resultant force affects the motion of an object. It is summarised in the form F = ma for an object with constant mass m.
The acceleration a is directly proportional to the resultant force F causing the acceleration and so a graph of a against F should be a straight line through the origin.

Figure 3.6 The apparatus: a dynamics trolley connected by cotton thread over a glass rod or pulley to falling masses on a hanger, with the remaining masses carried on the trolley.

Techniques
Record the mass of the trolley. Mark a distance of 1.00 m in the direction of the string from the starting point of the trolley. Carry out some trials to see how long it takes the trolley to cover 1.00 m. Use no extra mass on the trolley for the trial and vary the masses on the hanger. In the main investigation, start with the maximum mass, chosen from the trial results. Place all but one of the masses on the trolley, and only one of the masses on the hanger. In each new run a mass is removed from the trolley and placed on the hanger. (Or if a mass is added to the trolley, it must be taken from the hanger.)

F = 0.981 N, a = 0.531 m s–2; F = 1.96 N, a = 1.14 m s–2; F = 2.94 N, a = 1.79 m s–2; F = 3.92 N, a = 2.41 m s–2; F = 4.91 N, a = 3.10 m s–2

(b) Explain whether or not the graph shows that acceleration is directly proportional to force. (c) Suggest why the graph does not pass through the origin.

A level Analysis
P6. Describe how to adapt the method and the analysis of results, to show that the acceleration of an object is inversely proportional to the mass of the object.

General form of Newton's second law

In the case of the truck (Figure 3.5), we looked at reducing its velocity from v to zero, that is, changing its momentum from mv to zero. In general, for a change in momentum Δ(mv) in a time Δt, the equation for the resultant force is

F = Δ(mv) / Δt, or F = Δp / Δt

The right-hand side of this equation is the rate of change of momentum. The general form of Newton's second law is stated as: The rate of change of momentum of an object is directly proportional to the resultant force acting on it.
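Returning to the student results in P5 above: they can also be checked numerically with a least-squares straight line. This is an illustrative sketch only (the textbook asks for a hand-drawn line of best fit); it shows a positive slope with a small negative intercept, consistent with the graph not passing through the origin.

```python
# Fit a straight line a = slope * F + intercept through the P5 results.
F = [0.981, 1.96, 2.94, 3.92, 4.91]   # resultant force / N
a = [0.531, 1.14, 1.79, 2.41, 3.10]   # mean acceleration / m s^-2

n = len(F)
mean_F = sum(F) / n
mean_a = sum(a) / n
slope = (sum((x - mean_F) * (y - mean_a) for x, y in zip(F, a))
         / sum((x - mean_F) ** 2 for x in F))
intercept = mean_a - slope * mean_F
# slope is approximately 0.65 (m s^-2 per newton); the intercept is slightly
# negative, so a small applied force produces no acceleration, which is
# consistent with friction (see P5(c)).
```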
The change in momentum occurs in the direction of the resultant force. The equals sign in the equation above comes about from the definition of the newton as an SI unit: 1 N = 1 kg m s–2.

Force in N = rate of change of momentum in kg m s–2

6. A football player kicks a football of mass 0.43 kg. The player's foot exerts an average force of 130 N on the ball, which leaves the player's foot with a speed of 31 m s–1. Calculate the time that the ball was in contact with the player's foot.

7. A box is being transported in a van as shown in Figure 3.7. The box has a mass of 100 kg and the van is travelling at 20 m s–1.

Remember that Newton's second law in the form F = ma is valid only for an object of constant mass m. Then Δ(mv) = m Δv, so Δ(mv) / Δt = m Δv / Δt = ma.

Figure 3.7 A box of mass 100 kg in a van travelling at 20 m s–1.

(a) Calculate the momentum of the box. (b) The driver applies the brakes and the van takes 5 s to stop. Calculate the frictional force required to prevent the box from sliding forward.

The general form of Newton's second law can be used in problems where the mass is not constant, in particular when there is a steady flow of mass.

Worked example
A small jet engine releases 5.0 kg of exhaust per second. The exhaust gas comes out at 65 m s–1. Calculate the force with which the exhaust gases are emitted.

Consider the mass of gas that leaves the engine in 1 second. The momentum given to this gas is

p = mv = 5.0 kg × 65 m s–1 = 325 kg m s–1

The force on the gases to give this momentum change is

F = Δp / Δt = 325 kg m s–1 / 1 s = 325 N

8. A hosepipe releases water at a rate of 3.0 kg s–1. The water jet comes out of the hose at 4.7 m s–1. Calculate the force with which the water is expelled from the end of the hose.

Rapid changes in momentum

Many injuries in accidents are caused by the rapid change in momentum that happens in a collision.
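The benefit of extending a stopping time follows directly from F = Δp/Δt: the same change in momentum spread over more time needs a smaller average force. A short sketch (the momentum change and times here are illustrative values, not taken from the text):

```python
# Same momentum change, different stopping times => different average forces.

def average_force(delta_p_kg_ms: float, delta_t_s: float) -> float:
    # General form of Newton's second law: F = delta_p / delta_t
    return delta_p_kg_ms / delta_t_s

delta_p = 1000.0                          # kg m/s, an illustrative momentum change
abrupt = average_force(delta_p, 0.1)      # brought to rest in 0.1 s
cushioned = average_force(delta_p, 0.5)   # brought to rest in 0.5 s
# Five times the stopping time means one fifth of the average force.
```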
For example, in some road accidents, vehicles can be brought to rest from speeds of around 55 km h–1 in around 0.1 s. Devices such as airbags can increase the time taken for a person in a vehicle to come to a stop. Increasing the time the person takes to stop will decrease the rate at which their momentum changes, and therefore reduce the force on the person.

Models of the human body, called crash test dummies, like the one in Figure 3.8, are used to investigate what happens in a collision. The dummies are made to be as similar to a human body as possible. They are used to test the effect an impact has on the body without harming a real person and allow repeated testing to investigate the effect of changing a design variable. They are fitted with sensors that record the changes in force, acceleration and displacement with time. Figure 3.9 is a graph of the acceleration of a crash test dummy's head during a collision.

Figure 3.8 Crash test dummies have a similar mass to a person and are jointed to move like a human body would during a crash. Here the airbag increases the time taken for the dummy to come to a stop during the collision.

Figure 3.9 Output from an acceleration sensor placed in the head of a crash test dummy (acceleration in g, 0 to −50, against time in ms, −25 to 300). The car hit a concrete block at 56 km h–1.

Worked example
Use the information in Figure 3.9 to answer these questions. (a) The car makes contact with the concrete block at time = 0 on the graph. Explain why the head of the dummy does not begin to decelerate until about 12 ms after this time. (b) State the maximum acceleration of the head of the dummy in m s–2. (c) The head of the dummy has a mass of 5 kg. Calculate the maximum magnitude of the force acting on the head of the dummy during the collision.
(d) The head of the dummy has a velocity of zero at 180 ms. (i) Calculate the maximum change in momentum of the head of the dummy during the first 180 ms of the collision. (ii) Calculate the average force acting on the head of the dummy to bring it to a stop.

Answer
(a) The head of the dummy has inertia. It will continue with uniform velocity until acted upon by a force. This will be the restraining force from the airbag.
(b) From the graph, the maximum deceleration is about −36 g. 1 g is 9.81 m s–2. −36 × 9.81 m s–2 = −350 m s–2
(c) magnitude of maximum acceleration = 350 m s–2; F = ma = 5 kg × 350 m s–2 = 1800 N or 1.8 kN
(d) (i) initial velocity = 56 km h–1 = 56 000 m h–1; 56 000 m h–1 ÷ 3600 s h–1 = 15.6 m s–1 ≈ 16 m s–1
initial momentum = mass × velocity = 5 kg × 15.6 m s–1 = 78 kg m s–1
As final velocity = 0, then 78 kg m s–1 is the maximum change in momentum.
(ii) force = change in momentum ÷ time taken; time taken = 180 ms − 12 ms = 168 ms ≈ 0.17 s
F = 78 kg m s–1 ÷ 0.17 s = 460 N

9. A car of mass 1350 kg is travelling at 15.4 m s–1 when it collides with a wall. The car comes to a stop in 115 ms. (a) Calculate the change in momentum of the car. (b) Calculate the average force on the car. (c) Cars are designed to have front and rear sections that crumple quite easily on impact. Explain how this can help reduce forces on people in the car during a collision.

10. Fragile items like laptops are often delivered in boxes that contain polystyrene packing. Polystyrene is lightweight and easily deformed. Explain, using the terms momentum and force, how polystyrene packing protects fragile items during delivery.

Key ideas
➜➜Newton's first law: an object remains at rest or continues to move with uniform velocity unless acted upon by a resultant force. This means that an object with zero or constant velocity either has no forces acting on it, or all of the forces acting on it are balanced; this is called translational equilibrium.
➜➜The product of an object's mass and its velocity is called its momentum.
➜➜Newton's second law: The rate of change of momentum of an object is proportional to the resultant force acting on it. The change in momentum occurs in the direction of the resultant force.
➜➜The SI unit of force, the newton, is defined so that the resultant force on an object is equal to its rate of change of momentum.
➜➜F = Δp / Δt = Δ(mv) / Δt, which for constant m becomes F = ma.

3.3 Forces in interactions: Newton's third law

How does a jet engine work? A jet engine takes air in at the front, burns fuel in this air, and then forces the exhaust gas out at the back at high speed. When the engine forces the exhaust gases out at the back, there is an equal and opposite force of the gases on the engine, pushing it forward. This is an example of Newton's third law, which states: When an object exerts a force on a second object, the second object simultaneously exerts a force of equal magnitude and opposite direction on the first object.

Figure 3.10 Jet engines like this can propel aeroplanes of 400 000 kg to altitudes of 12 000 m and velocities of 920 km h–1.

Imagine sitting on a chair at a very heavy desk. Both the desk and the chair are fitted with small wheels. You try to push the heavy desk away from you. What happens? You will move backwards. The desk will only move a little. Although you are exerting a force on the desk, Newton's third law states that the desk will also be pushing back on you with an equal and opposite force. Your inertia is much less than that of the massive desk, so your acceleration is much greater.

Link: Refer back to Newton's second law in Topic 3.2.

Newton's third law holds for all types of forces: not only for contact forces such as pushes and pulls, but also for non-contact forces such as gravitational, magnetic and electrostatic forces.
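For a steady flow of exhaust gas, the forward force on the engine has the same magnitude as the rate at which momentum is carried away by the gas, F = (Δm/Δt) × v, as derived in Topic 3.2. A sketch using the figures from the Topic 3.2 worked example and from question 14 below (a check only, not part of the textbook):

```python
# Thrust from a steady mass flow: force = mass ejected per second * exit speed.

def flow_force(mass_rate_kg_s: float, exit_speed_ms: float) -> float:
    # Rate of change of momentum of the ejected gas, in newtons.
    return mass_rate_kg_s * exit_speed_ms

small_jet = flow_force(5.0, 65)   # worked example in Topic 3.2: 325 N
engine = flow_force(17, 85)       # the engine in question 14
```

By Newton's third law, the force of the engine on the gas and the force of the gas on the engine have this same magnitude in opposite directions.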
All forces are the result of interactions between objects. Any interaction involves a pair of forces; each object experiences a force of the same magnitude but in opposite directions. The opposing forces of the pair are always the same type, for example gravitational.

Tip: Remember that one force of the pair acts on one object and the other, opposite force acts on the other object.

Figure 3.11 Equal and opposite forces in a simple interaction. The free body diagrams of A and B show the force from B on A and the force from A on B.

11. Use Newton's third law to describe the force which is equal in magnitude and opposite in direction to each of these. (a) A heavy book exerting a contact force, equal to its weight, on a table. (b) A sprinter at the start of a race pushing back with a contact force on a starting block. (c) A horse pulling forwards on a heavy cart. (d) A bird pushing air downward with its wings when flying. (e) A ball being pulled down to the Earth's surface by gravitational force. (f) A bar magnet being used to repel an identical bar magnet.

12. Two ice skaters, A and B, of equal mass are standing opposite each other. Skater A pushes on skater B. Skater B does not push. Neither skater loses their balance. (a) Explain what happens.

13. A person is about to step off a small boat and onto a river bank, as shown in Figure 3.12.

Figure 3.12

(a) Draw a free body diagram to show the forces on the person and on the boat, due to the person stepping off the boat. (b) Explain, in terms of forces, why it would be safer for the person if the boat was tied to the river bank.

14. A jet engine releases 17 kg of exhaust gas every second. The exhaust gas comes out at 85 m s–1. Calculate the force that propels the engine.

Key ideas
➜➜Newton's third law: When an object exerts a force on a second object, the second object simultaneously exerts a force of equal magnitude and opposite direction on the first object.
➜➜This holds for all types of force including non-contact forces such as gravitational, magnetic and electrostatic forces.

3.4 Motion with resistive forces

Friction

Friction is a resistive force that acts where surfaces contact each other. When one surface moves over another, friction acts in the opposite direction to the motion. It resists the motion. The magnitude of the friction force depends on the materials that are in contact and on the magnitude of the contact force.

According to Newton's first law of motion, if an object is moving and in translational equilibrium, it will continue to move until some non-zero resultant force acts on it. In practice, for an object moving on a solid surface, this force will usually be friction. Figure 3.13a shows the force of friction on a moving object where friction is the only force acting. The object will decelerate because the friction force is in the opposite direction to the motion.

Link: See Topic 3.2 for Newton's first law of motion.

Figure 3.13 (a) Friction opposes motion. (b) The friction force reduces the resultant forward force.

Now, consider the same object with a driving force and a force of friction acting, as shown in Figure 3.13b. The driving force on the object is greater than the friction force so the resultant force forwards (to the right) causes the object to accelerate. The friction reduces the acceleration that would be produced by the driving force alone.

Imagine standing in a swimming pool where more than half of your body is beneath the surface of the water. It is difficult to walk as fast as when you are on land. This is due to the resistive force of the water being greater than that of the air. Gases and liquids are types of fluid. All fluids resist the motion of an object that is moving through the fluid. The resistive force is called drag.
Drag forces arise partly because of the density of the fluid − movement of an object through the fluid requires the fluid to be pushed out of the way − and partly due to the viscosity of the fluid. Viscosity is the resistance of a fluid to flow. Sometimes drag is referred to as 'viscous drag'.

The size of the drag force in a particular fluid depends on the speed of the object through the fluid, and on its size and shape:
• The greater the speed of the object, the greater the drag force.
• The greater the area of the object's surface that is presented to the fluid as it moves through the fluid, the greater the drag force. This is because more fluid needs to be pushed out of the way.
• The less streamlined an object's shape, the greater the drag force. A streamlined shape is one that allows steady, non-turbulent flow of the fluid past it.

Figure 3.14 (a) A streamlined object with a smaller cross-sectional area experiences less drag when moving through a fluid than (b) a larger object that is not streamlined.

Tip: Two objects of the same size and shape, moving at the same speed through the same fluid, experience the same drag force. The drag force does not depend on the mass of the object.

Motion under gravity with air resistance

Air resistance is the drag force that acts on all objects moving through air. In Topic 2.3 we looked at free fall in the absence of air resistance. In practice, air resistance is often not negligible and will significantly affect the motion of a falling object.

Consider an object that has just been dropped. At the instant it is released, its velocity will be zero, so there will be no air resistance. The object's downward velocity increases due to the gravitational force which is equal to mg. This gravitational force is constant provided the gravitational field strength g is constant.
As the velocity increases, air resistance will begin to increase. The increasing air resistance means that the resultant downward force, and hence the acceleration, decrease. These eventually become zero. Then the air resistance becomes constant, equal to the gravitational force (Figure 3.15). From this time onward, the forces on the object remain balanced and, according to Newton's first law, there will be no further acceleration. We call this velocity terminal velocity.

Tip: The term free fall is only used for falling objects when air resistance is not present or is ignored, that is when the only force acting on an object is its weight.

Figure 3.15 As the velocity of a falling object increases, air resistance also increases up to a maximum equal to the gravitational force mg, at which point the object has reached terminal velocity. (The stages shown: velocity = 0, air resistance = 0; velocity increasing, air resistance increasing; velocity constant, air resistance = mg.)

Figure 3.16 shows a velocity-time graph for a falling object in the presence of air resistance.

Figure 3.16 Velocity-time graph for an object falling in air.

15. A table tennis ball of mass 0.027 kg is dropped vertically off a high bridge. (a) Calculate the air resistance when the table tennis ball reaches terminal velocity. (b) Draw a free body diagram of the table tennis ball at terminal velocity.

16. A brick of mass 3.5 kg is dropped off a cliff. At time t after being dropped, the air resistance on the brick is 0.2 N. (a) Draw a vector diagram to show the magnitude and direction of the forces acting on the brick at time t. (b) Calculate the acceleration of the brick at time t.

What affects the shape of the velocity-time graph?

Consider two balls of equal diameter but with very different masses, like a table tennis ball and a golf ball.
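Before comparing the two balls, the approach to terminal velocity can be illustrated with a simple numerical model. The drag law assumed here (drag = k × v, growing with speed) and all of the numbers are illustrative assumptions, not values from the text; the chapter states only that air resistance increases with speed.

```python
# Numerical sketch of a fall with air resistance modelled as drag = k * v.
m = 0.027   # mass in kg (illustrative)
g = 9.81    # gravitational field strength in N/kg
k = 0.010   # assumed drag constant in kg/s

v, dt = 0.0, 0.001
for _ in range(40_000):          # simulate 40 s of falling in 1 ms steps
    accel = (m * g - k * v) / m  # resultant force divided by mass
    v += accel * dt

# At terminal velocity the drag k * v balances the weight m * g,
# so v settles at m * g / k (about 26.5 m/s with these numbers).
v_terminal = m * g / k
```

The loop reproduces the behaviour described above: the resultant force, and hence the acceleration, shrink as the drag grows, and the velocity levels off.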
The mass of the golf ball is approximately 20 times greater than that of the table tennis ball. If both balls are dropped from a high position, the table tennis ball will reach terminal velocity first. The golf ball will continue accelerating for longer, so reach a larger terminal velocity. Therefore, the golf ball will hit the ground first.

It may seem from this that air resistance is greater for a smaller mass, but this is not so. The force of air resistance depends on the area and shape of the moving object, and on the velocity of the object, but not on the mass. When both balls are falling at the same velocity, the forces of air resistance on them are equal. The slower drop of the table tennis ball can be explained using the relationship between force, mass and acceleration, F = ma. As the table tennis ball has a much smaller mass than the golf ball, the same resistive force will have a greater effect on its motion, reducing its acceleration by more and so limiting its velocity first. If these two balls were dropped in a vacuum, they would both hit the ground at exactly the same time.

An object falling through a more viscous fluid would reach terminal velocity in a shorter time because the resistive force is greater and becomes equal to the gravitational force sooner.

Worked example
Three identical steel balls, A, B and C are dropped from a height of 1.0 m. Ball A is dropped in a vacuum; ball B is dropped in air; ball C is dropped in water. Sketch graphs for the motion of A, B and C on the same axes to show the variation of: (a) velocity with time for each ball (b) acceleration with time for each ball.

Answer
(a) No resistive force acts on ball A, and so its velocity will increase uniformly with time. Ball B is subject to air resistance, which will cause the velocity to increase less rapidly with time. It will not reach terminal velocity from a height of 1.0 m.
Ball C has a greater resistive force on it than B as the drag force from water is greater than that of air, so its velocity will increase even less rapidly. It will probably not reach terminal velocity from a height of 1.0 m.

Figure 3.17 (a) Sketch graph of velocity with time, (b) sketch graph of acceleration with time.

(b) The acceleration of ball A will not change with time. The acceleration of ball B will decrease slightly with time. The acceleration of ball C will decrease significantly with time.

17. Sketch a velocity-time graph for a small object falling through two different viscous fluids. Use a solid line for the more viscous fluid and a dashed line for the less viscous fluid.

18. A crate of relief supplies is attached to a parachute and dropped from a plane. Sketch an acceleration-time graph for the crate falling to the ground.

Experimental skills: Investigating air resistance

Apparatus
You will need:
• to make a scale that is 2.00 m tall to be fixed to a wall. You can do this with a measuring tape or pieces of paper that are accurately marked. The intervals should be 0.10 m apart. The scale needs to be vertical in both planes (front to back and side to side)
• a table tennis (ping pong) ball
• a balance to measure the mass of the table tennis ball
• a timer or stopwatch
• a video camera that will allow frame-by-frame replay.

Questions
P1. (a) Explain whether the displacements should be measured from the top, middle or bottom of the ball. (b) Explain how a suitable position for the camera should be decided. (c) Explain why repeats are carried out and an average calculated.
P2. The average results from a pair of students are shown in Table 3.1. (a) Use the students' results to plot a displacement-time graph of the falling table tennis ball. (b) Draw a smooth best-fit curve through the points. (c) Describe the relationship shown on the graph.
(d) Use your graph to estimate the terminal velocity of the table tennis ball.

In this investigation you will investigate air resistance by attempting to estimate the terminal velocity of a table tennis ball falling vertically. A table tennis ball has a small mass, so when the ball is dropped the increasing air resistance quickly becomes significant compared with the ball's weight.

Techniques
Place the scale vertically with 0 at the top, as the scale will be used to measure the displacement of the ball as it falls and not its height. Position the timer or stopwatch close to the scale so that it can be seen when a video is made of the drop. Place the camera in a suitable position so that the entire 2.00 m drop can be recorded. Start the timer, start the video camera, and drop the table tennis ball from a height of 2.00 m in front of the scale. Do this three times so that an average displacement can be calculated for each time interval. Replay the video and record the displacement of the table tennis ball at regular time intervals, such as 0.10 s.

Link: You may recall the shapes of displacement-time graphs for objects that are accelerating and for objects that are at constant velocity from Chapter 2.

Table 3.1 (column headings: time / s; average displacement / m — the students' data are not reproduced here)

P3. Use the mass of the table tennis ball to calculate the force of air resistance when the ball is at terminal velocity.
P4. Sketch a graph to show how air resistance varies with time for a falling table tennis ball. Label any key values on your sketch graph.
P5. (a) State two main sources of uncertainty in this experiment. (b) Suggest two improvements to the procedure.

Horizontal motion with air resistance

Air resistance acts on all objects moving in air. It contributes to the resultant resistive force on a moving vehicle. Air resistance increases with the speed of the vehicle relative to the surrounding air; it is proportional to the square of the relative speed.
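The square-law dependence of air resistance on relative speed can be sketched as follows. The constant of proportionality C is an arbitrary illustrative value, not one from the text:

```python
# Air resistance proportional to the square of the relative speed.

def air_resistance(relative_speed_ms: float, C: float = 0.4) -> float:
    # C lumps together the effects of air density, frontal area and shape
    # (an illustrative value, not a figure from the textbook).
    return C * relative_speed_ms ** 2

# Doubling the relative speed quadruples the drag force.
ratio = air_resistance(20.0) / air_resistance(10.0)
```

Whatever the value of C, the ratio is 4, which is why cycling into a wind takes so much more driving force.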
This is why it requires more driving force to ride a bicycle in a direction opposite to the wind than it does to ride in conditions with no wind. Assuming the wind is constant, air resistance increases with the speed of the vehicle, so the resultant resistive force on the vehicle will also increase with speed.

For a given driving force, for example from an engine, the acceleration will become zero when the resistive forces increase to become equal to the driving force. The vehicle then has constant velocity. It is in translational equilibrium because the resultant force will be zero, as shown in Figure 3.3. Any moving vehicle will therefore have a maximum theoretical speed that will be reached when the resistive forces become equal to the maximum driving force.

Air resistance is reduced by streamlining an object so that its area perpendicular to motion is minimised. The car shown in Figure 3.18 was designed to achieve the lowest possible resistive force and, at the same time, the highest possible driving force. This enabled it to become the fastest vehicle to travel on land at 1228 km h–1, which is faster than the speed of sound in air.

Link: You will learn about relative speed in Topic 3.6.

19. A large cargo ship, initially at rest, produces a constant driving force. The driving force causes the ship to accelerate. Explain why the ship reaches a constant velocity although the driving force remains the same.

20. The Thrust SSC vehicle shown in Figure 3.18 broke the world record for speed on land. The rules for the world record state that a vehicle must travel at the record-breaking speed for a distance of 1.6 km. This is called a pass. The same vehicle must make another pass in the opposite direction within 1 hour of the first pass. Both passes are on horizontal ground. Explain why the vehicle must make two passes in opposite directions to qualify for a world speed record.
Figure 3.18 Thrust SSC became the fastest vehicle to travel on land in 1997 partly due to its design to reduce air resistance.

Projectile motion with air resistance

A projectile is an object moving in air with no driving force. In Chapter 2, we stated that the horizontal component of the velocity of a projectile would remain constant, but this is only when air resistance is neglected. A free body diagram of a projectile that is subject to air resistance will show only two forces: its weight that acts vertically downwards and the air resistance that acts in the opposite direction to the velocity of the projectile at that instant.

Link: It may be helpful to look back at Chapter 2, Topic 2.4, for descriptions of projectile motion.

Figure 3.19 In practice, a projectile is acted upon by its weight, mg, and by air resistance. Air resistance acts in a direction opposite to the velocity of the projectile at that instant.

You saw in Topic 2.4 that the path taken by a projectile is a curve. This curve will be a parabola when there is no air resistance. As air resistance always acts in a direction opposite to the velocity of the projectile, this force will reduce both the height and the range of the projectile. The path of a projectile with and without air resistance is shown in Figure 3.20.

Figure 3.20 The path of projectile motion (vertical height against horizontal distance) with no air resistance and with significant air resistance.

In Figure 3.20, notice how the path of the projectile is symmetrical about the highest point without air resistance. This is because the only acceleration is due to gravity and is in the vertical direction. The horizontal component of the velocity remains constant. In contrast, the path of the projectile with significant air resistance is not symmetrical.
This is because the horizontal component of the velocity is decreasing with time in the air.

21. The diagram in Figure 3.21 shows how a projectile would move in the absence of air resistance.

Figure 3.21

Copy the diagram and sketch the path of the same projectile in the presence of air resistance.

22. A projectile is launched horizontally from the edge of a cliff. Air resistance is not negligible. When the projectile hits the ground it still has a horizontal component to its velocity. Sketch a graph to show the variation of (a) the horizontal component of the velocity of the projectile with time (b) the vertical component of the velocity of the projectile with time.

Key ideas
➜➜Solid objects moving on solid surfaces experience friction that opposes their motion, reducing their acceleration.
➜➜Solid objects moving through gases or liquids experience a resistive force called the drag force.
➜➜Drag force is due partly to the viscosity of the fluid.
➜➜Drag force increases with the speed of the object and the area of the object and also depends on the shape of the object.
➜➜Air resistance is the drag force for an object moving through air.
➜➜Air resistance reduces the resultant downward force on a falling object and hence its acceleration.
➜➜The air resistance increases as the falling object's speed increases. When the air resistance becomes equal to the object's weight there is no resultant force so the object falls at a constant speed called its terminal velocity.
➜➜Air resistance is one of the resistive forces on a vehicle. It acts against the driving force and increases with the vehicle's speed.
➜➜If the total resistive forces on a vehicle become equal to the driving force, the velocity becomes constant.
➜➜For a projectile, air resistance causes the height and range to be reduced.
Key ideas

3.5 Conservation of momentum

Link
Refer back to Topic 3.2 for how to calculate an object’s momentum.

You may recall from your previous courses that momentum is always conserved in interactions such as collisions. In Topic 3.3 we looked at interactions between objects in terms of forces. When objects interact, we can also predict what will happen after the interaction using the concept of momentum.

In a game of snooker, a wooden cue is pushed towards a ball to strike it rapidly. This ball then rolls away from the cue and collides with another ball.

Figure 3.22 A cue is being used to strike the white ball. The white ball will roll away from the cue and collide with the red ball. What will happen next?

The white ball in Figure 3.22 has the same mass as the red ball. If the white ball hits the red ball directly in a straight line, then the white ball will stop and the red ball will start to move at the same speed that the white one was moving before the collision.

This demonstrates the conservation of momentum. The principle of conservation of momentum states that, if no external force acts on a system of objects, then the momentum of the system will remain unchanged.

Link
Refer back to Topic 3.2 to remind yourself of Newton’s second law. Refer back to Topic 3.3 to remind yourself of Newton’s third law.

‘No external force acts’ means that there is no resultant force on the objects immediately before their interaction. This means that they are both moving in a straight line with uniform velocity immediately before the interaction.

Conservation of momentum follows directly from Newton’s third law. Imagine two balls A and B colliding head-on. From Newton’s third law, the force FA on A by B is equal and opposite to the force FB on B by A.

FA = −FB
From Newton’s second law, the force on an object is equal to its rate of change of momentum. So

ΔpA/Δt = −ΔpB/Δt

The collision duration Δt is the same for both, so

ΔpA = −ΔpB
(pafter − pbefore)A = −(pafter − pbefore)B
(pafter)A + (pafter)B = (pbefore)A + (pbefore)B

In words,

total momentum before collision = total momentum after collision

Conservation of momentum in one dimension

Motion or interactions ‘in one dimension’ means that the objects move and interact only in one straight line. Momentum is always conserved when objects collide or separate from each other, provided no force acts except the forces of these objects on one another. In the example of snooker balls colliding, we can represent the interaction with the simple diagram shown in Figure 3.23.

Figure 3.23 Before and after a collision between two snooker balls (v = 0.5 m s–1 in each case).

In Figure 3.23, the white ball initially has a momentum, p, of mv = 0.16 kg × 0.5 m s–1 = 0.08 kg m s–1. As momentum is conserved, the momentum before the collision will be equal to the momentum after the collision. The momentum of the red ball after the collision is therefore also 0.08 kg m s–1, so its velocity is v = p/m = 0.08 kg m s–1 / 0.16 kg = 0.5 m s–1.

Worked example
A ball of mass 0.9 kg rolls with a speed of 3.5 m s–1 towards another ball of mass 2.2 kg which is initially at rest. After they collide, the 2.2 kg ball rolls with a speed of 1.2 m s–1 in the same direction as the 0.9 kg ball was rolling. Calculate the speed and direction of the 0.9 kg ball after they collide.

Answer
v1b = 3.5 m s–1, v2b = 0, m1 = 0.9 kg, m2 = 2.2 kg

Tip
Drawing a simple diagram to show before and after the interaction will help you to structure your calculation and help to avoid confusion between positive and negative directions.
Figure 3.24

total momentum before collision = total momentum after collision

We can call the balls 1 and 2, and use b for before and a for after.

m1v1b + m2v2b = m1v1a + m2v2a
(0.9 kg × 3.5 m s–1) + (2.2 kg × 0) = (0.9 kg × v) + (2.2 kg × 1.2 m s–1)
3.15 kg m s–1 = 0.9 kg × v + 2.64 kg m s–1
0.9 kg × v = 0.51 kg m s–1
v = 0.51 kg m s–1 / 0.9 kg = 0.57 m s–1

As this value is positive, we know the 0.9 kg ball moves in the same direction as the 2.2 kg ball.

23. A trolley, P, of mass 0.87 kg travelling at 0.56 m s–1 collides with a stationary trolley, R, of mass 0.78 kg. After the collision, both trolleys move in the same straight line. Trolley P travels at 0.22 m s–1 after the collision. Calculate the speed of trolley R after the collision.

Worked example
A railway wagon of mass 8.1 × 10⁴ kg travelling at a speed of 2.4 m s–1 collides with a railway wagon of mass 4.5 × 10⁴ kg travelling at 0.75 m s–1 in the opposite direction. They are on a level track. The two wagons join together. Calculate the speed of the two wagons after the collision, and state their direction.

Tip
Take care with signs. Remember that velocity and momentum are vector quantities, so opposite signs mean opposite directions in one-dimensional problems.

Answer
Draw a diagram like the one in Figure 3.25.

Figure 3.25 The wagons before the collision: 2.4 m s–1 in one direction, 0.75 m s–1 in the opposite direction.

total momentum before collision = total momentum after collision

We can call the wagons 1 and 2, and use b and a for before and after.

m1v1b + m2v2b = m1v1a + m2v2a

Since the two wagons join together in the collision, v1a = v2a. We will call this v.

(8.1 × 10⁴ kg × 2.4 m s–1) + (4.5 × 10⁴ kg × (−0.75 m s–1)) = (8.1 × 10⁴ kg + 4.5 × 10⁴ kg) × v
1.61 × 10⁵ kg m s–1 = 1.26 × 10⁵ kg × v
v = 1.61 × 10⁵ kg m s–1 / 1.26 × 10⁵ kg = 1.28 m s–1

As this value is positive, we can say that the combined wagons move in the direction that the 8.1 × 10⁴ kg wagon was initially moving.

24. Two cars collide as shown in Figure 3.26.
Figure 3.26 Car A (mass 1100 kg, velocity 6.4 m s–1) about to collide with stationary car B (mass 1200 kg); after the collision the 1100 kg and 1200 kg cars move together.

Car A of mass 1100 kg is travelling at 6.4 m s–1 when it collides with car B. Car B of mass 1200 kg is initially at rest. The two cars become locked together as a result of the collision and move together. Calculate the speed of the two cars together immediately after the collision.

25. A bullet of mass 5 g is fired into a bag of sand. The bag of sand has a mass of 10 kg and is hanging by a rope, free to swing. The bullet enters the bag at a velocity of 300 m s–1 and stops in the sand. Calculate the initial velocity with which the bag is made to move by the bullet.

26. A car of mass 890 kg travelling at 18 m s–1 collides with a stationary car. The two cars become locked together during the impact and move forward with a velocity of 9.7 m s–1. Calculate the mass of the stationary car.

When objects with a large difference in mass collide with each other, the effect of the conservation of momentum can be difficult to detect. Consider a common event. A car of mass 1100 kg is travelling at 35 m s–1 when it collides with a fly of mass 15 mg travelling in the opposite direction at 1.0 m s–1. As a result of the collision, the fly sticks to the front of the car.

The momentum of the car before the collision is 1100 kg × 35 m s–1 = 3.85 × 10⁴ kg m s–1
The momentum of the fly before the collision is 1.5 × 10⁻⁵ kg × (−1.0 m s–1) = −1.5 × 10⁻⁵ kg m s–1
So the total momentum after the collision is 3.85 × 10⁴ kg m s–1 + (−1.5 × 10⁻⁵ kg m s–1)

The change in momentum of the car is not detectable working at 3 or even 4 significant figures. Effectively the car continues with the same speed.

27. A concrete block falls from a height of several metres onto soft soil. The block becomes stuck in the soil and does not bounce or break. Explain, with reference to Newton’s laws, whether momentum is conserved in the collision between the block and the Earth.
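The one-dimensional calculations above all follow the same pattern: total momentum before equals total momentum after, with signed velocities for direction. A short sketch (illustrative Python, not part of the text; the function names are my own):

```python
def locked_together_velocity(m1, v1, m2, v2):
    """Perfectly inelastic 1-D collision: the bodies lock together.
    Velocities are signed, so opposite directions have opposite signs."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

def unknown_final_velocity(m1, v1b, m2, v2b, v2a):
    """1-D collision where only one final velocity (v2a) is known:
    solve m1*v1b + m2*v2b = m1*v1a + m2*v2a for v1a."""
    return (m1 * v1b + m2 * v2b - m2 * v2a) / m1

# Railway wagons from the worked example: about 1.28 m/s in the
# direction of the 8.1e4 kg wagon
v_wagons = locked_together_velocity(8.1e4, 2.4, 4.5e4, -0.75)

# 0.9 kg and 2.2 kg balls from the earlier worked example: about 0.57 m/s
v_ball = unknown_final_velocity(0.9, 3.5, 2.2, 0.0, 1.2)
```

A positive result means motion in the original positive direction, matching the sign reasoning used in the worked examples.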
The word explosion is sometimes used in dynamics for objects that are separating. It does not necessarily mean a chemical or nuclear explosion.

Explosions

Momentum is conserved not only when objects collide, but also when a single or composite object separates into parts that move away from each other. This is why a gun recoils (moves backward) when a bullet is fired and why a hosepipe moves backward when water comes out through a nozzle.

Worked example
A cannon of mass 100 kg is used to fire a cannonball of mass 5 kg. The cannon is at rest and the cannonball is fired at a velocity of 80 m s–1. Calculate the recoil velocity of the cannon.

Answer
total momentum before explosion = total momentum after explosion

Before the event, the total momentum of the system (cannon plus cannonball) is zero.

Tip
Remember to add the masses in a combined system like this.

We can call the cannon 1 and the cannonball 2, and use b and a for before and after.

m1v1b + m2v2b = m1v1a + m2v2a

Here v1b = v2b = 0.

total momentum before explosion = (100 kg + 5 kg) × 0 m s–1 = 0 kg m s–1

This means that after the event the total momentum must also be zero. Remember that momentum and velocity are vector quantities. This means the forward momentum of the cannonball will be equal in magnitude to the backward momentum of the cannon. Let the velocity of the cannon after the explosion be −v. The negative velocity indicates an opposite direction to the positive value (the cannonball).

0 = (5 kg × 80 m s–1) + (100 kg × (−v))
0 = 400 kg m s–1 − (100 kg × v)
100 kg × v = 400 kg m s–1
v = 400 kg m s–1 / 100 kg = 4 m s–1, so the recoil velocity of the cannon is −4 m s–1.

28. A spacecraft is made of two parts, A and B. A has mass 800 kg and B has mass 150 kg. Part B is released with a velocity of 10 m s–1 relative to the original spacecraft. Calculate the velocity of part A after the release.

29.
A large gun fires an artillery shell of mass 15 kg at a velocity of 680 m s–1. The initial recoil of the gun is 23 m s–1. Calculate the mass of the gun.

30. A large fire hose releases water at a rate of 30 kg s–1 and with a speed of 15 m s–1. Calculate the force needed to keep the end of the hose from moving backwards.

Conservation of momentum in two dimensions

Momentum is a vector quantity, so for situations involving more than one dimension, momentum must be conserved in any direction (provided there is no external force in that direction). It is easiest to consider the momentum in the initial direction of motion of one of the objects, and the momentum in a direction perpendicular to that.

Motion or interactions in two dimensions means that the objects may move and interact in different directions, not in the same straight line.

For example, think of a moving snooker ball colliding with a stationary one in a line that does not pass through the centre of both balls, as shown in Figure 3.27.

Figure 3.27 A two-dimensional collision between snooker balls viewed from above. The vectors represent the momentum of each ball but are not to scale.

Link
You may want to look back at resolving vectors in Chapter 1. In this section, you will be working out the components of momentum in two perpendicular directions.
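Resolving momentum into two perpendicular components can be sketched numerically. This is an illustrative example only, not from the text; it uses the ice-hockey-puck numbers that appear later in this topic, with the two bodies locking together after colliding at right angles.

```python
import math

def locked_together_2d(mA, vA, mB, vB):
    """Two bodies moving at right angles collide and lock together.
    A moves along x, B along y. Returns (final speed, angle in degrees
    measured from A's initial direction)."""
    pA = mA * vA                    # momentum component along x
    pB = mB * vB                    # momentum component along y
    p_total = math.hypot(pA, pB)    # vector sum of perpendicular components
    v = p_total / (mA + mB)         # combined mass carries the total momentum
    theta = math.degrees(math.atan2(pB, pA))
    return v, theta

v, theta = locked_together_2d(2.5, 1.3, 2.0, 1.1)  # about 0.87 m/s at about 34 degrees
```

Because the two momentum components are perpendicular, Pythagoras' theorem gives the magnitude of the total momentum and the inverse tangent gives its direction.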
The vector sum of the perpendicular components must be zero.

Figure 3.28 Resolving the momentum of each ball after the collision into components.

Worked example
In a game of bowls, one player rolls a ball towards their opponent’s ball. The aim is for the balls to collide and so move the opponent’s ball further away from a target. One player rolls a ball, A, at a velocity of 3.2 m s–1 to collide with an opponent’s ball, B, which is initially at rest. Both balls have an equal mass of 2.0 kg each. After the collision, ball A travels at an angle of 25° to the left of its initial direction. Its speed is 1.7 m s–1. Immediately after the collision, ball B moves at 40° from the direction of A’s initial motion (Figure 3.29).

Figure 3.29 Ball A (3.2 m s–1) strikes stationary ball B; after the collision A moves at 1.7 m s–1 at 25° on one side of its initial direction and B at 40° on the other side.

Calculate the speed of ball B immediately after the collision.

Answer
total momentum before collision parallel to initial motion = total momentum after collision parallel to initial motion

Let the direction of the initial motion of A be along the x-axis.

mAvAx + 0 = mAv’Ax + mBvBx

where v’Ax denotes the x-component of the velocity of A after the collision.

2.0 kg × 3.2 m s–1 = (2.0 kg × (1.7 cos 25°) m s–1) + (2.0 kg × (vBx cos 40°) m s–1)
6.4 kg m s–1 = 3.08 kg m s–1 + (2.0 kg × (vBx cos 40°) m s–1)
6.4 kg m s–1 − 3.08 kg m s–1 = 2.0 kg × (vBx cos 40°) m s–1
3.32 kg m s–1 = 2.0 kg × (vBx cos 40°) m s–1
vBx = 3.32 kg m s–1 / (2.0 kg × cos 40°) = 2.16 m s–1

In the y-direction:

0 = mAv’Ay + mBvBy

where v’Ay denotes the y-component of the velocity of A after the collision.

0 = (2.0 kg × (1.7 sin 25°) m s–1) + (2.0 kg × (−vBy sin 40°) m s–1)

Notice that the y-component of B’s velocity is negative, as it is opposite in direction to that of ball A.
0 = 1.44 kg m s–1 + (2.0 kg × (−vBy sin 40°) m s–1)
−1.44 kg m s–1 = 2.0 kg × (−vBy sin 40°) m s–1
vBy = 1.44 kg m s–1 / (2.0 kg × sin 40°) = 1.12 m s–1

The speed of the ball, vB, can be found using Pythagoras’ theorem:

vB² = vBx² + vBy² = (2.16 m s–1)² + (1.12 m s–1)² = 5.92 m² s–2
vB = 2.4 m s–1

31. A red ball of mass 0.5 kg travelling at 1.0 m s–1 collides with a stationary green ball of mass 0.5 kg. After the collision, the red ball travels at 0.27 m s–1 in a direction 30° left of its original direction. The green ball travels in a direction 35° to the right of where it was hit. Calculate the speed of the green ball.

Worked example
Two ice hockey pucks, A and B, are sliding on ice. Puck A, of mass 2.5 kg, is travelling at 1.3 m s–1 when it collides with puck B, of mass 2.0 kg, travelling at 1.1 m s–1. A and B are travelling in perpendicular directions when they collide and they become locked together as a result of the collision. Calculate the speed and direction of the pucks after the collision.

Answer
Draw a diagram.

Figure 3.30 vA = 1.3 m s–1, mA = 2.5 kg; vB = 1.1 m s–1, mB = 2.0 kg; combined mass m = 4.5 kg

total momentum before collision = total momentum after collision

pA = 2.5 kg × 1.3 m s–1 = 3.25 kg m s–1
pB = 2.0 kg × 1.1 m s–1 = 2.2 kg m s–1

Figure 3.31 The momentum vectors before the collision are at right angles.

(total momentum after collision)² = (2.2 kg m s–1)² + (3.25 kg m s–1)²
total momentum after collision = 3.92 kg m s–1

v = p/m = 3.92 kg m s–1 / 4.5 kg = 0.87 m s–1

tan θ = 2.2 kg m s–1 / 3.25 kg m s–1, so θ = tan⁻¹(2.2/3.25) = 34°

32. A car of mass 950 kg is travelling on a straight road at a uniform speed of 15 m s–1. The car is approaching a road junction. A truck of mass 2.4 × 10³ kg travelling at 12 m s–1 has a direction perpendicular to that of the car when the two vehicles collide.
The car and the truck become locked together as a result of the collision. Calculate the speed and direction of the two vehicles immediately after the collision. Give the direction relative to the direction of the car before the collision.

Worked example
Two tennis balls have equal mass. Ball A, travelling at 2.4 m s–1, collides with ball B, initially at rest. After the collision, ball A moves in a direction 30° away from its original direction and ball B moves in a direction 40° from the initial direction of ball A, as shown in Figure 3.32.

Figure 3.32 Motion of balls A and B after the collision: A at 30° on one side of its original direction, B at 40° on the other side.

Calculate the speeds of ball A and ball B after the collision.

Answer
total momentum before collision in original direction of ball A = total momentum after collision in original direction of ball A

m × 2.4 m s–1 = mvA cos 30° + mvB cos 40°

where vA and vB denote the speeds of A and B after the collision.

2.4 m s–1 = vA cos 30° + vB cos 40°
2.4 m s–1 = 0.866vA + 0.766vB [1]

total momentum before collision in direction perpendicular to that of ball A = total momentum after collision in direction perpendicular to that of ball A

0 = mvA sin 30° − mvB sin 40°
vA sin 30° = vB sin 40°
0.5vA = 0.643vB [2]

We have two variables, vA and vB, and two equations, so we can solve them as simultaneous equations.

To solve for vA we need to eliminate vB. From equation 2:

vB = 0.5vA / 0.643

Substituting this for vB in equation 1:

2.4 m s–1 = 0.866vA + 0.766 × (0.5vA / 0.643)
2.4 m s–1 = 0.866vA + 0.596vA
2.4 m s–1 = 1.462vA
vA = 1.64 m s–1

This value of vA can now be substituted into either equation 1 or equation 2 to find vB. Substituting in equation 2:

0.5 × 1.64 m s–1 = 0.643vB
vB = 1.28 m s–1

33. Two balls, P and R, of equal mass collide. Ball P is initially travelling at 3.9 m s–1 and ball R is initially stationary.
After the collision, ball P travels at an angle of 25° to its initial direction, and ball R travels at an angle of 30° to the initial direction of P. Calculate the speeds of both balls P and R after the collision.

Key ideas
➜➜The principle of conservation of momentum: if no external force acts on a system of interacting objects, then the momentum of the system will remain unchanged.
➜➜Momentum is always conserved when objects collide or separate from each other, provided no external force acts.
➜➜In two dimensions, momentum is conserved in any one direction.
➜➜Momentum problems in two directions can be solved by resolving known values of momentum into two perpendicular components. Momentum in each of these directions must be conserved.

3.6 Momentum and kinetic energy in interactions

In Topic 3.5 you saw that momentum is always conserved when objects collide or separate. Will kinetic energy always be conserved in such interactions? Kinetic energy, Ek, is the energy of movement and is calculated using

Ek = ½mv²

kinetic energy (J) = ½ × mass (kg) × [speed (m s–1)]²

Elastic and inelastic interactions

Consider the two balls in the game of snooker from Topic 3.5. Snooker balls have a mass of 0.16 kg. The rolling white ball has a velocity of 1.0 m s–1 and collides in a straight line with a stationary red ball of the same mass. We can use the subscript letters w and r for the white and red ball, and b and a for before and after.

Total momentum before the collision = mwvwb + mrvrb = (0.16 kg × 1.0 m s–1) + (0.16 kg × 0) = 0.16 kg m s–1

After the collision, the first rolling ball comes to rest and the other ball rolls away in a straight line. Momentum is always conserved, so the ball must roll away with a velocity of 1.0 m s–1 to give a total momentum after the collision of 0.16 kg m s–1.
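Checking whether an interaction is elastic amounts to summing ½mv² over all the bodies before and after. A minimal sketch (illustrative Python, not the book's notation; the tolerance argument is an assumption to allow for rounding):

```python
def total_kinetic_energy(bodies):
    """bodies is a list of (mass in kg, speed in m/s) pairs."""
    return sum(0.5 * m * v ** 2 for m, v in bodies)

def is_elastic(before, after, rel_tol=1e-3):
    """An interaction is elastic if total kinetic energy is unchanged."""
    ke_b = total_kinetic_energy(before)
    ke_a = total_kinetic_energy(after)
    return abs(ke_a - ke_b) <= rel_tol * max(ke_b, ke_a)

# Snooker balls from the text: the white ball (0.16 kg) stops and the
# red ball rolls away at 1.0 m/s
elastic = is_elastic([(0.16, 1.0), (0.16, 0.0)], [(0.16, 0.0), (0.16, 1.0)])
```

Applied to the snooker collision this reports an elastic interaction; applied to the railway wagons in the worked example that follows, the kinetic energies differ and it would report an inelastic one.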
The total kinetic energy before the collision = (½mv²)wb + (½mv²)rb = (½ × 0.16 kg × (1.0 m s–1)²) + (½ × 0.16 kg × 0²) = 0.08 J

The total kinetic energy after the collision = (½mv²)wa + (½mv²)ra = (½ × 0.16 kg × 0²) + (½ × 0.16 kg × (1.0 m s–1)²) = 0.08 J

Here, kinetic energy has been conserved. When kinetic energy is conserved in an interaction, the event is described as an elastic interaction. In practice, interactions are rarely perfectly elastic, as some of the initial kinetic energy will be transferred to another form.

Link
In Chapter 5 you will learn how to derive the equation that relates kinetic energy to mass and velocity. You will also learn more about the principle of conservation of energy.

You may also recall from previous courses that the total quantity of energy in a system is always conserved in any event (the principle of conservation of energy). The total kinetic energy of objects that interact may or may not be conserved. If kinetic energy is not conserved in an interaction, then some of this energy must be transferred to other forms such as thermal or elastic potential.

Worked example
Consider two railway wagons. One wagon of mass 1900 kg is travelling at 3 m s–1 on a level track and collides with another stationary wagon of mass 2200 kg. They join and continue to move together. Is this collision elastic?

Answer
First, use the principle of conservation of momentum to calculate the velocity with which the conjoined wagons move after the collision.

total momentum before collision = mv = (1900 kg × 3 m s–1) + (2200 kg × 0) = 5700 kg m s–1
total momentum after collision = 5700 kg m s–1
velocity after collision, v = p/m = 5700 kg m s–1 / (1900 + 2200) kg = 1.4 m s–1

Tip
Momentum will always be conserved in any interaction, even when kinetic energy is not conserved.

Next, compare the total kinetic energy of the system before and after the collision.
total kinetic energy before collision = ½mv² = (½ × 1900 kg × (3 m s–1)²) + (½ × 2200 kg × 0²) = 8550 J
total kinetic energy after collision = ½mv² = ½ × (1900 + 2200) kg × (1.4 m s–1)² = 4020 J

Kinetic energy has not been conserved, so the collision is not elastic. When kinetic energy is not conserved in an interaction, the event is described as an inelastic interaction.

34. A trolley of mass 0.95 kg travelling at 1.4 m s–1 collides with an identical trolley which is initially at rest. The two trolleys become locked together as a result of the collision.
(a) Calculate the speed of the trolleys after the collision.
(b) Show by calculation whether the collision is elastic or inelastic.

35. An air track is a piece of equipment used to produce an almost frictionless surface. Vehicles called gliders slide over the surface of the air track and can be used to study collisions. Two gliders are placed on an air track. Each has a bar magnet attached. The north poles of the magnets are facing each other as shown in Figure 3.33.

Figure 3.33 Gliders A and B, each carrying a magnet, on an air track with light gates.

Glider A has a total mass of 200 g including the magnet. Glider B has a total mass of 100 g including the magnet. Glider A is gently pushed towards glider B. The velocity of glider A is 5 cm s–1 immediately before reaching glider B, which is initially at rest. The two gliders do not make contact, as the magnets repel each other. After the interaction, glider A has a velocity of 1.7 cm s–1.
(a) Calculate the velocity of glider B after the interaction.
(b) Show by calculation whether this interaction is elastic or inelastic.

36. An alpha particle of mass 6.68 × 10⁻²⁷ kg has a velocity of 1.42 × 10⁷ m s–1. The alpha particle approaches a stationary helium atom.
The alpha particle and the nucleus of the helium atom repel each other with electrostatic forces, because they are both positively charged, but they do not make contact. The mass of the helium atom is equal to that of the alpha particle. The alpha particle and the helium atom move in different directions after the interaction, as shown in Figure 3.34.

Figure 3.34 The paths of the alpha particle and the helium atom after the interaction.

The speed of the alpha particle after the interaction is 1.01 × 10⁷ m s–1. The speed of the helium atom after the interaction is 9.98 × 10⁶ m s–1. Show that this interaction is elastic.

37. A plastic disc, A, of mass 165 g slides across a frictionless surface with a speed of 1.30 m s–1. The disc collides elastically with another disc, B, of equal mass which is initially at rest. The two discs then move in different directions as shown in Figure 3.35.

Figure 3.35 Discs A and B before the collision (A moving at 1.30 m s–1, B at rest) and after the collision (A moving at 0.760 m s–1).

Calculate the speed of disc B after the collision.

38. A proton of mass 1.67 × 10⁻²⁷ kg with a uniform velocity of 2.0 × 10⁴ m s–1 approaches a massive, positively charged, stationary object, O. The proton is repelled back in the opposite direction without making contact with O. The interaction is elastic.
(a) Show that the speed of O after the interaction is negligible.
(b) Calculate the change in momentum of the proton.

39. Particles in a gas collide with each other and with the walls of the container. These collisions are almost perfectly elastic. Explain what would happen to a gas in a sealed container if these collisions were inelastic.

Relative speed of approach and separation

Relative speed means the speed of one object measured in comparison to another. Imagine you are in a vehicle like a car or train. You are travelling in a straight line and another vehicle passes you in the opposite direction. It appears to be going much faster than your vehicle. This is because of relative speed.
For two objects travelling in opposite directions, their relative speed is the sum of their speeds. Now imagine your vehicle passes another vehicle travelling in the same direction. You appear to pass it quite slowly. This is again due to relative speed. For two objects travelling in the same direction, their relative speed is the difference between their speeds. The numerical value of relative speed is always positive, so when finding the difference, take the smaller speed away from the larger one. For two objects, one of which is not moving and the other is moving straight towards it, their relative speed is equal to the speed of the moving one. These situations are shown by the diagrams in Figure 3.36.

Figure 3.36 Determining the relative speed of two objects: moving in opposite directions, relative speed = VA + VB; moving in the same direction, relative speed = VA − VB; one object stationary, relative speed equals the speed of the moving object.

40. A train is travelling at a constant velocity of 52 m s–1 going north. It passes another train travelling at a constant velocity of 35 m s–1 going south. What is the relative speed of the two trains?
A 17 m s–1  B 35 m s–1  C 52 m s–1  D 87 m s–1

41. A car is travelling at a constant speed of 76 km h–1 when it overtakes a truck travelling at 51 km h–1 in the same direction. What is the relative speed of the two vehicles?
A 25 km h–1  B 51 km h–1  C 76 km h–1  D 127 km h–1

Consider two balls of equal mass rolling towards one another, each with a speed of 3.0 m s–1 (Figure 3.37). Before the collision the balls have a relative speed of 3.0 m s–1 + 3.0 m s–1 = 6.0 m s–1. We call this the relative speed of approach. Before the collision the total momentum is zero. For zero momentum afterwards, the velocities of A and B must be equal and opposite − they bounce apart.
If the collision is elastic, the total kinetic energy afterwards is equal to that before, so the balls must both have speed 3.0 m s–1. After the collision their speed relative to each other is also 3.0 m s–1 + 3.0 m s–1 = 6.0 m s–1. We call this the relative speed of separation.

Figure 3.37 Two balls of equal mass, each moving at 3.0 m s–1, before and after an elastic head-on collision.

In general, for masses of any size, in a perfectly elastic collision:

relative speed of approach = relative speed of separation

42. Two bowling balls, P and Q, of equal mass are rolling in the same direction in the same straight line. P is behind Q and going 1.2 m s–1 faster than Q. P collides with Q from behind. Q now goes 1.2 m s–1 faster than P. No external forces act during the collision (Figure 3.38).

Figure 3.38 Before the collision P moves at (v + 1.2) m s–1 and Q at v m s–1; after the collision P moves at v m s–1 and Q at (v + 1.2) m s–1.

Explain whether:
(a) momentum has been conserved in the collision
(b) the collision is elastic or inelastic.

43. Use calculations to show whether these collisions are elastic or inelastic.
(a) A ball of mass 5 kg travelling at 10 m s–1 collides with a stationary ball of mass 3 kg. After the collision, the 5 kg ball has a speed of 4 m s–1. Both balls move in the same straight line.
(b) A truck of mass 3500 kg travelling at a speed of 20 m s–1 collides into the back of a car of mass 900 kg. The car is travelling at 15 m s–1 in the same direction as the truck at the time of collision. Both vehicles become joined together and move forward in the same straight line.

Explosions

Will the separation of a composite object into parts be elastic or inelastic? The relative speed of the parts of a composite object before separation will always be zero, and their relative speed of separation will always be greater than zero. Therefore, the separation of objects must always be inelastic. We can show this by calculating the kinetic energy.
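This claim can be checked numerically. The sketch below is illustrative (not the book's working) and uses the gun-and-bullet numbers from the worked example in this topic: momentum stays zero while kinetic energy appears, so the separation is inelastic.

```python
def recoil_velocity(m_body, m_projectile, v_projectile):
    """Total momentum is zero before and after the separation, so the
    body recoils with equal and opposite momentum to the projectile."""
    return -(m_projectile * v_projectile) / m_body

# 3 kg gun firing a 5 g bullet at 400 m/s
v_gun = recoil_velocity(3.0, 0.005, 400.0)   # about -0.67 m/s (recoil)

ke_before = 0.0                               # nothing moves before firing
ke_after = 0.5 * 0.005 * 400.0 ** 2 + 0.5 * 3.0 * v_gun ** 2
# ke_after is roughly 400.7 J, greater than zero, so the separation is inelastic
```

The kinetic energy that appears comes from whatever energy store drove the separation, as the text goes on to explain.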
Tip
Comparing relative speed before and after an interaction in order to show whether the interaction is elastic or inelastic can be more straightforward than calculating kinetic energy before and after.

Worked example
A gun of mass 3 kg fires a 5 g bullet with a velocity of 400 m s–1. Show, using kinetic energy, that the interaction between the gun and the bullet is inelastic.

Answer
momentum before firing = mgvgb + mbvbb = (3 kg × 0 m s–1) + (0.005 kg × 0 m s–1) = 0
momentum of bullet leaving the gun = mbvba = 5 × 10⁻³ kg × 400 m s–1 = 2 kg m s–1
momentum of gun immediately after firing = −2 kg m s–1
velocity of gun = momentum of gun after firing / mg = 2 kg m s–1 / 3 kg = 0.7 m s–1 (backwards)
total kinetic energy before firing = 0
total kinetic energy after firing = (½mv²)b + (½mv²)g = (½ × 5 × 10⁻³ kg × (400 m s–1)²) + (½ × 3 kg × (0.7 m s–1)²) = 400 J

The kinetic energy after firing is greater than the kinetic energy before firing, so this separation is inelastic.

Link
In Chapter 5, you will learn more about the different forms of energy and how they can be transferred.

In a separation, or explosion, the energy which is transferred to the moving parts comes from the energy that was stored to cause the separation or explosion. For example, if two stationary trolleys are pushed apart by a compressed spring, then the kinetic energy transferred to the trolleys has come from the elastic potential energy in the compressed spring.

44. A cannon of mass 650 kg fires a ball of mass 25 kg at 32 m s–1.
(a) Calculate the recoil speed of the cannon.
(b) Show, using kinetic energy, whether the interaction between the cannon and the ball is elastic or inelastic.

Key ideas
➜➜In an elastic collision, momentum and kinetic energy are both conserved.
➜➜In an inelastic collision, momentum is conserved but kinetic energy is not conserved. Some energy is transferred to other forms.
➜➜For a perfectly elastic collision, the relative speed of approach is equal to the relative speed of separation.
➜➜Momentum is also conserved in the separation of a composite object into parts. Such a separation is always inelastic.

ASSIGNMENT 000: Collisions in space

Background
Scientists have known for many years that collisions occur in space. There are large craters visible on the Moon and also on Earth, some of which must have been caused by collisions. Relatively small objects in space, like asteroids and comets, can become caught in the gravitational field of a much larger object, like a planet. This can result in the smaller object starting to orbit the larger one. Or it can result in a collision.

Until 1994, scientists had never been able to watch a collision between a space object and a planet. In 1993 they had seen that a comet was travelling on a path that would take it very close to the planet Jupiter. They thought there was a very high chance of a collision, and observed the events that followed very closely.

Figure 3.39 One of the fragments of comet Shoemaker-Levy 9 collides with Jupiter.

In 1993, astronomers Carolyn and Eugene Shoemaker, working with David Levy, discovered a comet. This comet was named after them as Shoemaker-Levy 9, or just SL9. Most of the known comets in the Solar System orbit the Sun, but this comet was in orbit around the planet Jupiter.

A1. Comet SL9 was in orbit around Jupiter long before 1993.
(a) Describe the nature of the forces which were acting to cause this orbit.
(b) Describe the magnitude and direction of these forces.
(c) Explain, using Newton’s first law, how a comet can follow a curved path around a larger object like a planet or the Sun.

It was estimated that SL9 had been in orbit around Jupiter for 20−30 years before its discovery. In the year before it was discovered, calculations showed that SL9 had passed around 40 000 km from the upper atmosphere of Jupiter.
When a smaller object, like a comet, approaches another larger one, like a planet, the force due to gravity from the planet can cause the comet to break apart (Figure 3.40).

Figure 3.40 Gravitational forces from a planet can cause a comet to break apart. Not to scale.

Consider a particle P which is part of the comet in Figure 3.40. If P is close to the surface of the comet, then the force, F, due to gravity from the planet will be much greater than the force, f, due to gravity from the comet itself. If the difference between these forces is large enough, the comet will break apart.

A2. Newton's third law predicts that there will be a pair of equal and opposite forces acting when one object exerts a force on another. This can be called an N3 pair. Describe the force that makes up the N3 pair in Figure 3.40 for:
(a) force F
(b) force f.

This close approach of SL9 to Jupiter caused the comet to break up into 23 fragments that could be seen from Earth. Scientists did not see this happen, but calculated that it happened in 1992.

A3. Assume that SL9 broke apart into all 23 parts in one instant.
(a) Explain whether the event of the comet breaking apart was elastic or inelastic.
(b) Explain why all 23 fragments continued to travel in approximately the same direction as each other.

A4. Which property of the SL9 fragments could scientists on Earth observe directly?
A their momentum
B their velocity
C their kinetic energy
D their mass

One of the fragments had a mass of 10^13 kg and was travelling at 60 km s–1 on impact with Jupiter.

A5. (a) Calculate the velocity of this fragment in m s–1.
(b) Calculate:
(i) the momentum of this fragment
(ii) the kinetic energy of this fragment.

This fragment collided with the atmosphere of Jupiter at an angle of 45° to the top of the atmosphere.

A6. (a) Draw a vector diagram to represent the momentum just before impact of this fragment.
(b) Calculate the component of this fragment's momentum towards the centre of Jupiter.

When the fragments of SL9 impacted with Jupiter, each of them broke apart into many smaller particles. Careful observations from Earth indicated that some of these particles that moved across the top of Jupiter's atmosphere had a constant deceleration.

A7. Suggest what can be concluded about the force on these particles that caused them to have constant deceleration.

Jupiter has the strongest gravitational field of all the planets in the Solar System. Some scientists state that Jupiter reduces the occurrence of impacts of objects like comets with Earth.

A8. Suggest an explanation for this statement.

Chapter overview
(Concept map linking: Newton's first law; forces and fields; mass; resultant force; resistive forces; acceleration of free fall; acceleration; Newton's second law; rate of change of momentum; testing predictions against evidence; momentum; principle of conservation of momentum; Newton's third law; models of physical systems; elastic and inelastic interactions; kinetic energy.)

Chapter review

Learning outcomes
• understand that mass is a property of an object that resists movement or change in motion
• recall and be able to use the equation F = ma and understand that acceleration and the resultant force causing it are always in the same direction
• be able to define linear momentum as the product of mass and velocity, p = mv, and use this equation to perform calculations on momentum
• be able to define force as the rate of change of an object's momentum
• be able to state Newton's first, second and third laws of motion and apply them to given situations
• describe and understand the concept of weight as the effect of a gravitational field on a mass and recall that the weight of a body is equal to the product of its mass and the acceleration of free fall, W = mg
• know about frictional forces and viscous/drag forces including air resistance
• know
that drag force increases with the speed of an object • describe the motion of objects in a uniform gravitational field with air resistance • understand that objects moving against a resistive force may reach a terminal (constant) velocity • state the principle of conservation of momentum • be able to apply the principle of conservation of momentum to solve problems involving objects in both one and two dimensions • know the difference between elastic and inelastic interactions • understand that, while momentum of a system is always conserved in interactions between bodies, a change in kinetic energy takes place if the collision is inelastic • recall that, for a perfectly elastic collision, the relative speed of approach of the interacting objects is equal to the relative speed of separation of the objects Chapter review 1. (a) State Newton’s first law of motion. (b) State whether a resultant force acts on a car when it is: (i) parked and not moving (ii) accelerating rapidly (iii) travelling at a constant high speed in a straight line (iv) travelling at a constant slow speed around a corner. 2. A bowling ball has a mass of 6.8 kg. A soccer ball has a mass of 0.55 kg. Both balls have approximately the same diameters. When both balls are rolling together at the same speed, explain which ball is easier to stop. 3. (a) State Newton’s second law of motion. (b) Write the two equations that can be used to summarise Newton’s second law. (c) The SI derived unit of force is the newton, N. Use Newton’s second law to express the newton in terms of SI base units. (d) Calculate the acceleration produced when a resultant force of 650 N acts on a 2.5 kg mass. 4. (a) Define momentum. (b) Calculate the momentum of each of these objects. Give your answers in standard form. (i) a 12 mg housefly travelling at 3 m s–1 (ii) a 5.5 × 107 kg cruise ship travelling at 12 m s–1 5. An artificial satellite passes through a cloud of cosmic dust particles. 
Each cosmic dust particle has a mass of 1 pg. The particles collide with the satellite at 70 km s–1 at a rate of 10^6 per second and are each brought to a stop in 1 µs. Calculate the average force exerted by the dust particles on the satellite.

6. A model railway wagon of mass 120 g is travelling at 0.11 m s–1 when it collides with another stationary model wagon of mass 150 g. The two wagons become joined together.
(a) Calculate the speed of the two wagons immediately after the collision.
(b) Determine whether the collision is elastic or inelastic.

7. Air track rider P, travelling at 4.0 cm s–1, collides with rider Q, travelling at 2.5 cm s–1 in the same forwards direction. Both riders have the same mass. Which of these could describe their velocities after an elastic collision?
A P forwards at 2.0 cm s–1 and Q forwards at 6.5 cm s–1.
B P backwards at 1.0 cm s–1 and Q forwards at 5.5 cm s–1.
C P forwards at 2.5 cm s–1 and Q forwards at 4.0 cm s–1.
D P backwards at 0.5 cm s–1 and Q forwards at 6.0 cm s–1.

8. A skydiver of mass 70.0 kg jumps out of a stationary hot air balloon and reaches a terminal velocity of 55.0 m s–1. The skydiver then opens their parachute and slows to a new terminal velocity of 6.00 m s–1. Describe how the force of air resistance changes during the fall. Calculate the force of air resistance where possible.
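Calculations like the gun recoil in the worked example and question 44 (the cannon and ball) can be checked numerically. A minimal sketch (the function names are mine, not the textbook's):

```python
def recoil_speed(m_gun, m_proj, v_proj):
    # Momentum is conserved: the total momentum is zero before and after firing,
    # so the gun's momentum must exactly balance the projectile's.
    return m_proj * v_proj / m_gun

def separation_is_elastic(m_gun, m_proj, v_proj):
    # The kinetic energy before firing is zero; any kinetic energy
    # afterwards means the separation is inelastic.
    v_gun = recoil_speed(m_gun, m_proj, v_proj)
    ke_after = 0.5 * m_proj * v_proj**2 + 0.5 * m_gun * v_gun**2
    return ke_after == 0.0

# Worked example: 3 kg gun, 5 g bullet at 400 m/s
print(round(recoil_speed(3.0, 0.005, 400.0), 2))   # about 0.67 m/s
print(separation_is_elastic(3.0, 0.005, 400.0))    # False: the separation is inelastic
```

The same two functions answer question 44 for the 650 kg cannon and 25 kg ball.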
Generate Secant Number Sequence - TOOLYATRI.COM Welcome to the "Generate Secant Number Sequence"! This tool allows you to generate the Secant number sequence up to a specified limit. Secant numbers are a sequence of integers that arise in various mathematical contexts, including trigonometry, combinatorics, and number theory. Steps to use the tool: 1. Enter the desired limit in the input field provided. 2. Click on the "Generate Secant Numbers" button. 3. The tool will compute the Secant numbers up to the specified limit and display them in the output textarea. Functionality of the tool: The tool utilizes a JavaScript function called generateSecantNumbers() to calculate the Secant numbers. It starts with the initial values of 0 and 1 and then iterates to generate subsequent Secant numbers by summing the previous two numbers in the sequence. Benefits of using this tool: • Efficiency: Quickly generate the Secant number sequence without manual computation, saving time and effort. • Accuracy: The tool accurately computes the Secant numbers based on the specified limit. • Flexibility: Users can specify the desired limit, allowing for the generation of Secant numbers within a specific range. 1. What are Secant numbers? □ Secant numbers form a sequence of integers where each number is the sum of the two preceding numbers. The sequence typically starts with 0 and 1, similar to the Fibonacci sequence. 2. Where do Secant numbers appear in mathematics? □ Secant numbers have applications in various mathematical fields, including trigonometry, where they represent the secant function's values at certain angles. They also appear in combinatorics, number theory, and other areas of mathematics. 3. Can Secant numbers be negative? □ While Secant numbers can be negative, the sequence generated by this tool is limited to non-negative integers. However, in certain contexts, such as when dealing with trigonometric functions, Secant numbers can indeed be negative. 4. 
Are there any interesting properties of Secant numbers? □ Yes, Secant numbers exhibit various interesting properties, including recurrence relations, connections to other number sequences like the Fibonacci numbers, and relationships with trigonometric functions. 5. How can I apply Secant numbers in practical problems? □ Secant numbers can be useful in solving problems related to sequences, series, recurrence relations, and mathematical modeling. They can also provide insights into the behavior of certain functions and phenomena in real-world scenarios.
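The generation logic described above can be sketched as follows. Note two assumptions: the limit is treated as a maximum value rather than a count, and the recurrence is the one the page defines (summing the two previous terms, which is the Fibonacci recurrence rather than the secant/Euler zigzag numbers of combinatorics). The function name mirrors the page's JavaScript helper:

```python
def generate_secant_numbers(limit):
    # Start from the initial values 0 and 1, then repeatedly append
    # the sum of the two preceding terms, stopping once a term
    # would exceed the given limit.
    seq = []
    a, b = 0, 1
    while a <= limit:
        seq.append(a)
        a, b = b, a + b
    return seq

print(generate_secant_numbers(20))  # [0, 1, 1, 2, 3, 5, 8, 13]
```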
Shoshichi Kobayashi (1932 – 2012) Shoshichi Kobayashi, 80, Emeritus Professor of Mathematics at the University of California at Berkeley, died peacefully in his sleep on August 29, 2012. He was on the faculty at Berkeley for 50 years, and has authored over 15 books in the area of differential geometry and the history of mathematics. Shoshichi studied at the University of Tokyo, receiving his B.S. degree in mathematics in 1953. He spent one year of graduate study in Paris and Strasbourg (1953-54) as a recipient of the French Government’s scholarship, and completed his Ph.D. at the University of Washington, Seattle in 1956. He was appointed Member of the Institute for Advanced Study at Princeton (1956-58), Postdoctoral Research Associate at MIT (1958-60), and Assistant Professor at the University of British Columbia (1960-62). In 1962 he joined the faculty at Berkeley and became Full Professor in 1966. He was a visiting professor at numerous departments of mathematics around the world, including the University of Tokyo, the University of Mainz, the University of Bonn, MIT, and the University of Maryland. Most recently he had been visiting Keio University in Tokyo. He was a Sloan Fellow (1964-66), a Guggenheim Fellow (1977-78), and Chairman of his Department (1978-81). Shoshichi Kobayashi was one of the most important contributors to the field of differential geometry in the last half of the twentieth century. His early work, beginning in 1954, concerned the theory of connections, a notion basic to all aspects of differential geometry and its applications. Prof. Kobayashi’s early work was essentially in clarifying and extending many of Élie Cartan’s ideas, particularly those involving projective and conformal geometry, and making them available to modern differential geometers. A second major interest of his was the relation of curvature to topology, in particular on Kähler manifolds. Throughout his career, Prof. 
Kobayashi continued to focus his attention on Kähler and more general complex manifolds. One of his most enduring contributions was the introduction in 1967 of what soon became known as the “Kobayashi pseudodistance,” along with the related notion of “Kobayashi hyperbolicity.” Since that time, these notions have become indispensable tools for the study of mappings of complex manifolds. Other areas in which Kobayashi made fundamental advances into the twenty-first century include the theory of complex vector bundles, intrinsic distances in affine and projective differential geometry, and the study of the symmetries of geometric structures using filtered Lie algebras. Several of Shoshichi Kobayashi’s books are standard references in differential and complex geometry, among them his two-volume treatise with Katsumi Nomizu entitled “Foundations of Differential Geometry” (1963, 69). Generations of students and other scholars have learned the essentials of the subject from his books. Prof. Toshiki Mabuchi (Osaka University) , in his 2013 expository article contributed to the special issue of the Japanese journal “Mathematical Seminar,” published in February 2013, in honor of Shoshichi Kobayashi, discusses the following six topics attributed to Shoshichi Kobayashi. 1. Kobayashi hyperbolicity, and measure hyperbolicity 2. Projectively invariant distances 3. Study of Frankel’s conjecture and Kobayashi-Ochiai’s characterization of complex projective spaces 4. Filtered Lie algebras and geometric structures 5. Study of Hermitian-Einstein holomorphic vector bundles and Kobayashi-Hitchin correspondence
Centroid of a Trapezoid – Properties and Explanation

In this article, students will be able to learn about the topic of the centroid of a trapezoid. We will also look at the centroid of the trapezoid formula. But before we learn how to find the centroid of a trapezoid, students need to focus on the basics and start from the beginning. The first thing that one needs to learn is the definition of a trapezoid. A trapezoid can be defined as a quadrilateral in which there are two parallel sides. A trapezoid is also known as a trapezium. So, if you see trapezium written in some other book, then don't be confused. It means the same thing as a trapezoid. A trapezoid can also be defined as a four-sided closed figure. It covers some area and has its own perimeter. We will learn the formulas for both the area and the perimeter of a trapezoid later in this article. It should be noted that a trapezoid is a two-dimensional figure, not a three-dimensional one. The sides that are parallel to one another are known as the bases of the trapezoid. The sides that are not parallel to each other are known as the lateral sides or legs. The distance between the two parallel sides is also known as the altitude. Some readers might find it interesting to learn that there is also disagreement over the exact definition of a trapezoid. Different schools of mathematics take up different definitions. According to one of those schools of mathematics, a trapezoid can have only one pair of parallel sides. Another school of mathematics holds that a trapezoid can have more than one pair of parallel sides. This means that if we consider the first school of thought to be true, then a parallelogram is not a trapezoid. But according to the second school of thought, a parallelogram is a trapezoid. There are also different types of trapezoids:

A right trapezoid contains a pair of right angles.
In an isosceles trapezoid, the non-parallel sides, or legs, of the trapezoid are equal in length.

In a scalene trapezoid, neither the sides nor the angles of the trapezoid are equal.

The Formula for Area and Perimeter of a Trapezoid

Now, let's look at the formulas for calculating the area and perimeter of a trapezoid. The area of a trapezoid can be calculated by taking the average of the two bases and multiplying that by the altitude. This means that the formula for the area of a trapezoid can be written as:

Area = ½(a + b) × h

Moving on to the formula for the perimeter of a trapezoid, it is simply the sum of all the sides. This means that if a trapezoid has four sides a, b, c, and d, then the formula for the perimeter of a trapezoid is:

Perimeter = a + b + c + d

The Properties of a Trapezoid

There are various important properties of a trapezoid. We have discussed those properties in the list below.

• The diagonals and base angles of an isosceles trapezoid are equal in length.
• If a median is drawn on a trapezoid, then the median will be parallel to the bases, and its length will be the average of the lengths of the bases.
• The intersection point of the diagonals is collinear with the midpoints of the two opposite sides.
• If a trapezoid has parallel sides a and b, legs c and d, and diagonals p and q, then the following equation holds true:

p² + q² = c² + d² + 2ab

In the next section, we will look at the centroid of a trapezoid formula.
The Formula for Centroid of a Trapezoid

In this section, we will look at the centroid of a trapezoid and the formula used to find it. As you must already know, a trapezoid is a quadrilateral that has two parallel sides. The centroid, as the name indicates, lies at the centre of the trapezoid. For any trapezoid with parallel sides a and b, the distance of the centroid from the side of length b, measured along the height, is:

X = [(b + 2a) / (3(a + b))] × h

In this formula, h is the height of the trapezoid, and a and b are the lengths of the parallel sides.

FAQs on Centroid of a Trapezoid

Question 1. Define what you understand by a trapezoid.
Answer: A trapezoid is a polygon with four sides. There are two parallel and two non-parallel sides in this figure. Trapezoids are also known as trapeziums in some cases.

Question 2. What is the formula for calculating the area of a trapezoid?
Answer: The area of a trapezoid can be calculated by taking the average of the two parallel bases and multiplying it by the altitude, the distance between the two parallel sides. The formula for calculating the area of a trapezoid is: Area = ½(a + b) × h.

Question 3. Can you state that a trapezoid is a quadrilateral?
Answer: Yes, a trapezoid can be classified as a quadrilateral, as it has four sides. Two sides are parallel, and the remaining two sides are not parallel.

Question 4. Mention any three attributes of a trapezoid.
Answer: The three primary attributes of a trapezoid are:
• The diagonals and base angles of an isosceles trapezoid are equal in length.
• The non-parallel sides (legs) of an isosceles trapezoid are of the same length. It can also be said that they are congruent to one another.
• The intersection point of the diagonals is collinear with the midpoints of the two opposite sides.

Question 5. Calculate the centroid of a trapezoid which has the following dimensions: a = 12', b = 5', and h = 5'.
Answer: We know that the values are a = 12', b = 5', and h = 5'.
Using the centroid of the trapezoid formula, we get:
X = [(b + 2a) / (3(a + b))] × h
X = [(5 + 2 × 12) / (3 × (12 + 5))] × 5
X = (29 / 51) × 5
X ≈ 2.84
Hence, the centroid of the trapezoid lies at a distance of 2.84'.

Question 6. There is a trapezoid in which the parallel sides measure 8 cm and 10 cm. The height of the trapezoid is 9 cm. Use this information to find the centroid.
Answer: Let a and b be the two parallel sides of the trapezoid. This means that a = 8 cm, b = 10 cm, and h = 9 cm.
Using the centroid of a trapezoid formula, we get:
X = [(b + 2a) / (3(a + b))] × h
X = [(10 + 2 × 8) / (3 × (8 + 10))] × 9
X = (26 / 54) × 9
X = 13 / 3
X ≈ 4.33
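Both worked answers above can be reproduced with a short script (the function name is mine; the returned value is the centroid's distance from the side of length b, measured along the height, as in the article's formula):

```python
def trapezoid_centroid_height(a, b, h):
    # X = (b + 2a) / (3(a + b)) * h
    # a, b are the parallel sides; h is the height of the trapezoid
    return (b + 2 * a) / (3 * (a + b)) * h

print(round(trapezoid_centroid_height(12, 5, 5), 2))   # 2.84
print(round(trapezoid_centroid_height(8, 10, 9), 2))   # 4.33
```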
What is the gain or loss on this retirement | Business Finance Assignment Help

A company has bonds outstanding with a par value of $100,000. The unamortized discount on these bonds is $4,800. The company retired these bonds by buying them on the open market at 97. What is the gain or loss on this retirement?
$3,000 loss.
$1,800 gain.
$1,800 loss.
$0 gain or loss.
$3,000 gain.

Mathematics Assignment Help
8x + 2y = 0
−20x − 5y = 18
If there is exactly one solution, use the graph to find it. (If there is no solution, enter NO SOLUTION. If there are infinitely many solutions, enter INFINITELY MANY.)

Business Finance Assignment Help
Prepare a memo for the new tax staff explaining some of the common terms:
1. types of tax rate structure the U.S. tax system applies
2. taxable income and how it is determined
3. ways in which the applicable tax rate is determined
4. tax liability, including how it is calculated using both the tax rate formula and the tax table
5. an example of how to calculate the tax liability using the tax rate table and the tax rate formula for a taxpayer with taxable income of $55,000, filing status married filing jointly
6. discussion of marginal tax rate

Science Assignment Help
Force exerted on an object: A person pulls a loaded sled of mass m = 75 kg along a horizontal surface at constant velocity. Find the force exerted by the person. Find the net work done if the sled is pulled 10 m.

Science Assignment Help
Can DNA help determine eye color, or does it have to do with the pairing of pigments? (Must be 100% original and include a reference for where the information came from.)

Mathematics Assignment Help
Which of the following best describes the solution to the system of equations?
6x + 5y = 8
6x + 5y = 15
A) The system of equations has no solution.
B) The system of equations has exactly one solution where x = 0 and y = 8/5.
C) The system of equations has infinitely many solutions.
D) The system of equations has exactly one solution where x = 8 and y = 15.
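For the bond question above, the arithmetic can be sketched as follows (a hypothetical helper, not from any accounting library): the carrying value is par minus the unamortized discount, and the retirement gain is the carrying value minus the cash paid. Here it gives −$1,800, i.e. a $1,800 loss.

```python
def bond_retirement_gain(par, unamortized_discount, price_pct):
    carrying_value = par - unamortized_discount   # 100,000 - 4,800 = 95,200
    cash_paid = par * price_pct / 100.0           # bought at 97 -> 97,000
    return carrying_value - cash_paid             # negative means a loss

print(bond_retirement_gain(100_000, 4_800, 97))   # -1800.0, so a $1,800 loss
```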
How to get the factorial of a number in C

By definition, the factorial of a non-negative integer n is the product of all the positive integers less than or equal to n, as represented in the following math notation:

n! = n × (n − 1) × (n − 2) × … × 2 × 1

Factorials have a prominent place in mathematics, as they are encountered in combinatorics, Taylor expansions and number theory. For instance, the factorial of n is the number of ways one can arrange n different objects.

If you are studying computer science, one of the most common tasks to solve in programming is how to obtain the factorial of a number. In this article, we'll explain how you can obtain the factorial of a positive integer in C with very simple logic.

A. With iterations

The easiest way to implement and understand the logic that obtains the factorial of a number n is with a for loop. You will need to define a for loop that iterates from 1 up to the given number n. On every iteration, the fact variable, which initially holds the value 1, is updated with the result of multiplying itself by the index of the current iteration. In the following example we'll prompt for the number to calculate, and we'll print the result at the end:

#include <stdio.h>

int main()
{
    // Note that initially, the fact variable is equal to 1
    int c, n, fact = 1;

    // Prompt the user for the number to calculate; n can be defined statically instead if you want
    printf("Enter a number to calculate its factorial: \n");
    scanf("%d", &n);

    // Calculate the factorial
    for (c = 1; c <= n; c++) {
        fact = fact * c;
    }

    // Print the result
    printf("Factorial of %d is: %d\n", n, fact);

    return 0;
}

You can convert it into a function if you want:

#include <stdio.h>

// Declare the function before using it to prevent "error: conflicting types for 'factorial'"
long factorial(int);

// Usage example:
int main()
{
    int fact = 10;

    // Prints: Factorial of 10 is: 3628800
    printf("Factorial of %d is: %ld\n", fact, factorial(fact));

    return 0;
}

// Function that returns the factorial of a number n
long factorial(int n)
{
    int c;
    long result = 1;

    for (c = 1; c <= n; c++) {
        result = result * c;
    }

    return result;
}

B. The recursive way

In programming, recursion is a technique in which a function calls itself. For example, in the following code, the factorial function calls itself:

#include <stdio.h>

// Declare the function before using it to prevent "error: conflicting types for 'factorial'"
long factorial(int);

// Usage example:
int main()
{
    int fact = 10;

    // Prints: Factorial of 10 is: 3628800
    printf("Factorial of %d is: %ld\n", fact, factorial(fact));

    return 0;
}

// Function that returns the factorial of a number n
long factorial(int n)
{
    if (n == 0) {
        return 1;
    }

    return n * factorial(n - 1);
}

Note that the forward declaration of the function is necessary for these examples to compile as written. Which way would you prefer?

Happy coding!
Previous: cgecon Up: ../lapack-c.html Next: cgees CGEEQU - compute row and column scalings intended to equili- brate an M by N matrix A and reduce its condition number SUBROUTINE CGEEQU( M, N, A, LDA, R, C, ROWCND, COLCND, AMAX, INFO ) INTEGER INFO, LDA, M, N REAL AMAX, COLCND, ROWCND REAL C( * ), R( * ) COMPLEX A( LDA, * ) CGEEQU computes row and column scalings intended to equili- brate an M by N matrix A and reduce its condition number. R returns the row scale factors and C the column scale fac- tors, chosen to try to make the largest entry in each row and column of the matrix B with elements B(i,j)=R(i)*A(i,j)*C(j) have absolute value 1. R(i) and C(j) are restricted to be between SMLNUM = smallest safe number and BIGNUM = largest safe number. Use of these scaling factors is not guaranteed to reduce the condition number of A but works well in practice. M (input) INTEGER The number of rows of the matrix A. M >= 0. N (input) INTEGER The number of columns of the matrix A. N >= 0. A (input) COMPLEX array, dimension (LDA,N) The M-by-N matrix whose equilibration factors are to be computed. LDA (input) INTEGER The leading dimension of the array A. LDA >= R (output) REAL array, dimension (M) If INFO = 0 or INFO > M, R contains the row scale factors for A. C (output) REAL array, dimension (N) If INFO = 0, C contains the column scale factors for A. ROWCND (output) REAL If INFO = 0 or INFO > M, ROWCND contains the ratio of the smallest R(i) to the largest R(i). If ROWCND >= 0.1 and AMAX is neither too large nor too small, it is not worth scaling by R. COLCND (output) REAL If INFO = 0, COLCND contains the ratio of the smal- lest C(i) to the largest C(i). If COLCND >= 0.1, it is not worth scaling by C. AMAX (output) REAL Absolute value of largest matrix element. If AMAX is very close to overflow or very close to under- flow, the matrix should be scaled. 
INFO (output) INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal > 0: if INFO = i, and i is <= M: the i-th row of A is exactly zero > M: the (i-M)-th column of A is exactly zero
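The scaling described above can be sketched in a few lines of plain Python for the real case (a simplified model, not a binding to LAPACK: it uses plain magnitudes rather than CGEEQU's complex 1-norm of each entry, assumes no exactly-zero rows or columns, and skips the INFO error reporting):

```python
def geequ(a, smlnum=1e-300, bignum=1e300):
    clip = lambda x: min(max(x, smlnum), bignum)
    nrows, ncols = len(a), len(a[0])
    # R(i): reciprocal of the largest magnitude in row i, kept in a safe range
    r = [1.0 / clip(max(abs(x) for x in row)) for row in a]
    # C(j): reciprocal of the largest magnitude in column j of the row-scaled matrix
    c = [1.0 / clip(max(r[i] * abs(a[i][j]) for i in range(nrows)))
         for j in range(ncols)]
    rowcnd = min(r) / max(r)       # ratio of smallest R(i) to largest R(i)
    colcnd = min(c) / max(c)       # ratio of smallest C(j) to largest C(j)
    amax = max(abs(x) for row in a for x in row)
    return r, c, rowcnd, colcnd, amax

# The scaled matrix B(i,j) = R(i) * A(i,j) * C(j) has no entry larger than 1,
# and the largest entry in each column has magnitude 1.
a = [[1.0, 200.0], [0.003, 4.0]]
r, c, rowcnd, colcnd, amax = geequ(a)
b = [[r[i] * a[i][j] * c[j] for j in range(2)] for i in range(2)]
```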
2.2: Linear Algebra Last updated Page ID \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) ( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\) \( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\) \( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\) \( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vectorC}[1]{\textbf{#1}} \) \( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \) \( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \) \( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \) \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\ evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) 
We’ve seen that in quantum mechanics, the state of an electron in some potential is given by a wave function \(\psi(\vec x,t)\), and physical variables are represented by operators on this wave function, such as the momentum in the x-direction \(p_x =-i\hbar\partial/\partial x\). The Schrödinger wave equation is a linear equation, which means that if \(\psi_1\) and \(\psi_2\) are solutions, then so is \(c_1\psi_1+c_2\psi_2\), where \(c_1, c_2\) are arbitrary complex numbers. This linearity of the sets of possible solutions is true generally in quantum mechanics, as is the representation of physical variables by operators on the wave functions. The mathematical structure this describes, the linear set of possible states and sets of operators on those states, is in fact a linear algebra of operators acting on a vector space. From now on, this is the language we’ll be using most of the time. To clarify, we’ll give some definitions.

What is a Vector Space?

The prototypical vector space is of course the set of real vectors in ordinary three-dimensional space; these vectors can be represented by trios of real numbers \((v_1,v_2,v_3)\) measuring the components in the x, y and z directions respectively. The basic properties of these vectors are:

• any vector multiplied by a number is another vector in the space, \(a(v_1,v_2,v_3)=(av_1,av_2,av_3)\);
• the sum of two vectors is another vector in the space, given by just adding the corresponding components together: \((v_1+w_1,v_2+w_2,v_3+w_3)\).
These two properties together are referred to as “closure”: adding vectors and multiplying them by numbers cannot get you out of the space.

• A further property is that there is a unique null vector \((0,0,0)\) and each vector has an additive inverse \((-v_1,-v_2,-v_3)\) which added to the original vector gives the null vector.

Mathematicians have generalized the definition of a vector space: a general vector space has the properties we’ve listed above for three-dimensional real vectors, but the operations of addition and multiplication by a number are generalized to more abstract operations between more general entities. The operations are, however, restricted to being commutative and associative.

Notice that the list of necessary properties for a general vector space does not include that the vectors have a magnitude—that would be an additional requirement, giving what is called a normed vector space. More about that later.

To go from the familiar three-dimensional vector space to the vector spaces relevant to quantum mechanics, first the real numbers (components of the vector and possible multiplying factors) are to be generalized to complex numbers, and second the three-component vector goes to an n-component vector. The consequent n-dimensional complex space is sufficient to describe the quantum mechanics of angular momentum, an important subject. But to describe the wave function of a particle in a box requires an infinite-dimensional space, one dimension for each Fourier component, and to describe the wave function for a particle on an infinite line requires the set of all normalizable continuous differentiable functions on that line. Fortunately, all these generalizations are to finite or infinite sets of complex numbers, so the mathematicians’ vector space requirements of commutativity and associativity are always trivially satisfied.
We use Dirac’s notation for vectors, \(|1\rangle,|2\rangle\) and call them “kets”, so, in his language, if \(|1\rangle,|2\rangle\) belong to the space, so does \(c_1|1\rangle +c_2|2\rangle\) for arbitrary complex constants \(c_1, c_2\). Since our vectors are made up of complex numbers, multiplying any vector by zero gives the null vector, and the additive inverse is given by reversing the signs of all the numbers in the vector.

Clearly, the set of solutions of Schrödinger’s equation for an electron in a potential satisfies the requirements for a vector space: \(\psi(\vec x,t)\) is just a complex number at each point in space, so only complex numbers are involved in forming \(c_1\psi_1+c_2\psi_2\), and commutativity, associativity, etc., follow at once.

Vector Space Dimensionality

The vectors \( |1\rangle ,|2\rangle ,|3\rangle\) are linearly independent if \[ c_1|1\rangle +c_2|2\rangle +c_3|3\rangle =0 \tag{2.2.1}\] implies \[ c_1=c_2=c_3=0 \tag{2.2.2}\]

A vector space is n-dimensional if the maximum number of linearly independent vectors in the space is n. Such a space is often called \(V^n(C)\), or \(V^n(R)\) if only real numbers are used.

Now, vector spaces with finite dimension n are clearly insufficient for describing functions of a continuous variable x. But they are well worth reviewing here: as we’ve mentioned, they are fine for describing quantized angular momentum, and they serve as a natural introduction to the infinite-dimensional spaces needed to describe spatial wavefunctions.

A set of n linearly independent vectors in n-dimensional space is a basis—any vector can be written in a unique way as a sum over a basis: \[ |V\rangle=\sum v_i|i\rangle \tag{2.2.3}\]

You can check the uniqueness by taking the difference between two supposedly distinct sums: it will be a linear relation between independent vectors, a contradiction.
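As a concrete illustration (not part of the notes’ own notation), a ket in \(V^n(C)\) can be sketched as a list of complex components; the function names here are illustrative, and the closure and basis-expansion properties above become simple list operations:

```python
# Minimal sketch: kets in V^n(C) as Python lists of complex numbers.

def ket_add(v, w):
    """Sum of two kets: componentwise addition (closure under +)."""
    return [vi + wi for vi, wi in zip(v, w)]

def ket_scale(c, v):
    """Scalar multiple of a ket by a complex number c (closure under *)."""
    return [c * vi for vi in v]

def expand(v):
    """Rebuild |V> = sum_i v_i |i> over the standard basis, Eq. (2.2.3)."""
    n = len(v)
    basis = [[1 if j == i else 0 for j in range(n)] for i in range(n)]
    total = [0] * n
    for vi, ket_i in zip(v, basis):
        total = ket_add(total, ket_scale(vi, ket_i))
    return total

v = [1 + 2j, 3 - 1j, 0.5j]
assert expand(v) == v   # the basis expansion reproduces the vector
```

The uniqueness argument in the text is reflected in the fact that each component \(v_i\) is pinned down by the corresponding basis ket.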
Since all vectors in the space can be written as linear sums over the elements of the basis, the sum of multiples of any two vectors has the form: \[ a|V\rangle+b|W\rangle=\sum (av_i+bw_i)|i\rangle \tag{2.2.4}\]

Inner Product Spaces

The vector spaces of relevance in quantum mechanics also have an operation associating a number with a pair of vectors, a generalization of the dot product of two ordinary three-dimensional vectors, \[ \vec a\cdot \vec b =\sum a_ib_i \tag{2.2.5}\]

Following Dirac, we write the inner product of two ket vectors \(|V\rangle,|W\rangle\) as \(\langle W|V\rangle\). Dirac refers to this \(\langle \; | \; \rangle\) form as a “bracket” made up of a “bra” and a “ket”. This means that each ket vector \(|V\rangle\) has an associated bra \(\langle V|\). For the case of a real n-dimensional vector, \(|V\rangle,\langle V|\) are identical—but we require for the more general case that \[ \langle W|V\rangle=\langle V|W\rangle^*\tag{2.2.6}\] where \(*\) denotes complex conjugate. This implies that for a ket \((v_1,...,v_n)\) the bra will be \((v_1^*,...,v_n^*)\). (Actually, bras are usually written as rows, kets as columns, so that the inner product follows the standard rules for matrix multiplication.) Evidently for the n-dimensional complex vector \(\langle V|V\rangle\) is real and positive except for the null vector: \[ \langle V|V\rangle=\sum_1^n |v_i|^2 \tag{2.2.7}\]

For the more general inner product spaces considered later we require \(\langle V|V\rangle\) to be positive, except for the null vector. (These requirements do restrict the classes of vector spaces we are considering—no Lorentz metric, for example—but they are all satisfied by the spaces relevant to nonrelativistic quantum mechanics.) The norm of \(|V\rangle\) is then defined by \[ |V|=\sqrt{\langle V|V\rangle} \tag{2.2.8}\]

If \(|V\rangle\) is a member of \(V^n(C)\), so is \(a|V\rangle\), for any complex number \(a\).
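The bra–ket rules above can be checked numerically; this is a sketch under the assumption that kets are lists of complex numbers (the bra is obtained by conjugating the components):

```python
import math

# Sketch: <W|V> = sum_i w_i* v_i, with the bra components conjugated.

def inner(w, v):
    """Inner product <W|V> of two kets given as component lists."""
    return sum(wi.conjugate() * vi for wi, vi in zip(w, v))

def norm(v):
    """|V| = sqrt(<V|V>), Eq. (2.2.8); <V|V> is real and non-negative."""
    return math.sqrt(inner(v, v).real)

V = [1 + 1j, 2 - 1j]
W = [3j, 1 + 2j]

assert inner(W, V) == inner(V, W).conjugate()   # conjugate symmetry, Eq. (2.2.6)
assert inner(V, V) == 7                          # |1+1j|^2 + |2-1j|^2, Eq. (2.2.7)
assert abs(norm(V) - math.sqrt(7)) < 1e-12
```

Note that \(\langle V|V\rangle\) comes out exactly real here, as Eq. (2.2.7) requires.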
We require the inner product operation to commute with multiplication by a number, so \[ \langle W|(a|V\rangle)=a\langle W|V\rangle \tag{2.2.9}\] The complex conjugate of the right hand side is \(a^*\langle V|W\rangle\). For consistency, the bra corresponding to the ket \(a|V\rangle\) must therefore be \(\langle V|a^*\)—in any case obvious from the definition of the bra in n complex dimensions given above.

It follows that if \[ |V\rangle=\sum v_i|i\rangle, \; |W\rangle=\sum w_i|i\rangle, \; then \; \langle V|W\rangle=\sum_{i,j} v_i^*w_j \langle i|j\rangle \tag{2.2.10}\]

Constructing an Orthonormal Basis: the Gram-Schmidt Process

To have something better resembling the standard dot product of ordinary three vectors, we need \(\langle i|j\rangle=\delta_{ij}\), that is, we need to construct an orthonormal basis in the space. There is a straightforward procedure for doing this called the Gram-Schmidt process. We begin with a linearly independent set of basis vectors, \(|1\rangle, |2\rangle, |3\rangle\),... .

We first normalize \(|1\rangle\) by dividing it by its norm. Call the normalized vector \(|I\rangle\). Now \(|2\rangle\) cannot be parallel to \(|I\rangle\), because the original basis was of linearly independent vectors, but \(|2\rangle\) in general has a nonzero component parallel to \(|I\rangle\), equal to \(|I\rangle\langle I|2\rangle\), since \(|I\rangle\) is normalized. Therefore, the vector \(|2\rangle-|I\rangle\langle I|2\rangle\) is perpendicular to \(|I\rangle\), as is easily verified. It is also easy to compute the norm of this vector, and divide by it to get \(|II\rangle\), the second member of the orthonormal basis. Next, we take \(|3\rangle\) and subtract off its components in the directions \(|I\rangle\) and \(|II\rangle\), normalize the remainder, and so on.
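The Gram-Schmidt steps just described translate directly into code; this is a sketch (kets as component lists, names illustrative), not the only way to organize the computation:

```python
import math

# Sketch of the Gram-Schmidt process: subtract components along the
# already-built orthonormal kets, then normalize the remainder.

def inner(w, v):
    return sum(wi.conjugate() * vi for wi, vi in zip(w, v))

def gram_schmidt(vectors):
    """Turn a linearly independent set of kets into an orthonormal basis."""
    basis = []
    for v in vectors:
        # |v> -> |v> - sum_e |e><e|v>  (remove components along each |e>)
        for e in basis:
            c = inner(e, v)
            v = [vi - c * ei for vi, ei in zip(v, e)]
        # Normalize the remainder.
        n = math.sqrt(inner(v, v).real)
        basis.append([vi / n for vi in v])
    return basis

basis = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])

# Check orthonormality <i|j> = delta_ij (up to rounding):
for i, e in enumerate(basis):
    for j, f in enumerate(basis):
        assert abs(inner(e, f) - (1 if i == j else 0)) < 1e-12
```

Each pass removes the components parallel to the kets already constructed, exactly as in the \(|2\rangle-|I\rangle\langle I|2\rangle\) step of the text.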
In an n-dimensional space, having constructed an orthonormal basis with members \(|i\rangle\), any vector \(|V\rangle\) can be written as a column vector, \[ |V\rangle= \sum v_i |i\rangle= \begin{pmatrix}v_1 \\ v_2 \\ . \\ . \\ v_n \end{pmatrix} \, , \; where \; |1\rangle= \begin{pmatrix}1 \\ 0 \\ . \\ . \\ 0 \end{pmatrix} \; and \: so \: on. \tag{2.2.11}\]

The corresponding bra is \(\langle V|=\sum v_i^*\langle i|\), which we write as a row vector with the elements complex conjugated, \(\langle V|=(v_1^*,v_2^*,...v_n^*)\). This operation, going from columns to rows and taking the complex conjugate, is called taking the adjoint, and can also be applied to matrices, as we shall see shortly.

The reason for representing the bra as a row is that the inner product of two vectors is then given by standard matrix multiplication: \[ \langle V|W\rangle=(v_1^*,v_2^*,...,v_n^*) \begin{pmatrix}w_1 \\ . \\ . \\ w_n \end{pmatrix} \tag{2.2.12}\] (Of course, this only works with an orthonormal base.)

The Schwartz Inequality

The Schwartz inequality is the generalization to any inner product space of the result \(|\vec a \cdot \vec b|^2 \le |\vec a|^2|\vec b|^2\) (or \(\cos^2 \theta \le 1\)) for ordinary three-dimensional vectors. The equality sign in that result only holds when the vectors are parallel. To generalize to higher dimensions, one might just note that two vectors are in a two-dimensional subspace, but an illuminating way of understanding the inequality is to write the vector \(\vec a\) as a sum of two components, one parallel to \(\vec b\) and one perpendicular to \(\vec b\). The component parallel to \(\vec b\) is just \(\vec b(\vec a\cdot \vec b)/|\vec b|^2\), so the component perpendicular to \(\vec b\) is the vector \(\vec a_{\bot}=\vec a-\vec b(\vec a\cdot\vec b)/|\vec b|^2\). Substituting this expression into \(\vec a_{\bot}\cdot\vec a_{\bot} \ge0 \), the inequality follows.
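The three-dimensional decomposition just used can be checked numerically; the vectors below are a made-up example:

```python
# Sketch: split a into components parallel and perpendicular to b,
# and verify that a_perp . a_perp >= 0 yields the Schwartz inequality.

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

a = [1.0, 2.0, 3.0]
b = [-2.0, 0.5, 1.0]

coeff = dot(a, b) / dot(b, b)                  # (a.b)/|b|^2
a_perp = [ai - coeff * bi for ai, bi in zip(a, b)]

assert abs(dot(a_perp, b)) < 1e-12             # perpendicular to b, as claimed
assert dot(a_perp, a_perp) >= 0                # non-negative squared length
assert dot(a, b)**2 <= dot(a, a) * dot(b, b)   # the Schwartz inequality
```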
This same point can be made in a general inner product space: if \(|V\rangle\), \(|W\rangle\) are two vectors, then \[ |Z\rangle=|V\rangle-\frac{|W\rangle \langle W|V\rangle}{|W|^2} \tag{2.2.13}\] is the component of \(|V\rangle\) perpendicular to \(|W\rangle\), as is easily checked by taking its inner product with \(|W\rangle\). Then \[ \langle Z|Z\rangle \ge0 \;\; gives\; immediately\;\; |\langle V|W\rangle|^2 \le |V|^2|W|^2 \tag{2.2.14}\]

Linear Operators

A linear operator A takes any vector in a linear vector space to a vector in that space, \(A|V\rangle=|V'\rangle\), and satisfies \[A(c_1|V_1\rangle+c_2|V_2\rangle)= c_1A|V_1\rangle+c_2A|V_2\rangle \tag{2.2.15}\] with \(c_1\), \(c_2\) arbitrary complex constants.

The identity operator \(I\) is (obviously!) defined by: \[ I|V\rangle=|V\rangle \;\; for \; all \; |V\rangle \tag{2.2.16}\]

For an n-dimensional vector space with an orthonormal basis \(|1\rangle,...,|n\rangle\), since any vector in the space can be expressed as a sum \(|V\rangle=\sum v_i|i\rangle\), the linear operator is completely determined by its action on the basis vectors—this is all we need to know.

It’s easy to find an expression for the identity operator in terms of bras and kets. Taking the inner product of both sides of the equation \(|V\rangle=\sum v_i|i\rangle\) with the bra \(\langle i|\) gives \(\langle i|V\rangle=v_i\), so \[ |V\rangle=\sum v_i|i\rangle=\sum |i\rangle\langle i|V\rangle \tag{2.2.17}\] Since this is true for any vector in the space, it follows that the identity operator is just \[ I=\sum_1^n |i\rangle\langle i| \tag{2.2.18}\] This is an important result: it will reappear in many disguises.

To analyze the action of a general linear operator \(A\), we just need to know how it acts on each basis vector. Beginning with \(A|1\rangle\), this must be some sum over the basis vectors, and since they are orthonormal, the component in the \(|i\rangle\) direction must be just \(\langle i|A|1\rangle\).
That is, \[ A|1\rangle=\sum_1^n |i\rangle\langle i|A|1\rangle=\sum_1^n A_{i1}|i\rangle\, ,\; writing\; \langle i|A|1\rangle =A_{i1} \tag{2.2.19}\]

So if the linear operator A acting on \(|V\rangle=\sum v_i|i\rangle\) gives \(|V'\rangle=\sum v_i'|i\rangle\), that is, \(A|V\rangle=|V'\rangle\), the linearity tells us that \[ \sum v_i'|i\rangle=|V'\rangle=A|V\rangle=\sum v_j A|j\rangle= \sum_{i,j} v_j |i\rangle\langle i|A|j\rangle=\sum_{i,j} v_j A_{ij}|i\rangle \tag{2.2.20}\] where in the fourth step we just inserted the identity operator.

Since the \(|i\rangle\)’s are all orthogonal, the coefficient of a particular \(|i\rangle\) on the left-hand side of the equation must be identical with the coefficient of the same \(|i\rangle\) on the right-hand side. That is, \(v_i'=A_{ij}v_j\). Therefore the operator \(A\) is simply equivalent to matrix multiplication: \[\begin{pmatrix}v_1'\\ v_2'\\ .\\ .\\ v_n'\end{pmatrix}= \begin{pmatrix} \langle1|A|1\rangle &\langle1|A|2\rangle & .& .&\langle1|A|n\rangle\\ \langle2|A|1\rangle &\langle2|A|2\rangle & .& .& .\\ .& .& .& .& .\\ . & .& .& .& .\\ \langle n|A|1\rangle &\langle n|A|2\rangle & .& .&\langle n|A|n\rangle \end{pmatrix} \begin{pmatrix}v_1\\ v_2\\ .\\ .\\ v_n\end{pmatrix} \tag{2.2.21}\]

Evidently, then, applying two linear operators one after the other is equivalent to successive matrix multiplication—and, therefore, since matrices do not in general commute, nor do linear operators. (Of course, if we hope to represent quantum variables as linear operators on a vector space, this has to be true—the momentum operator \(p=-i\hbar d/dx\) certainly doesn’t commute with x!)

Projection Operators

It is important to note that a linear operator applied successively to the members of an orthonormal basis might give a new set of vectors which no longer span the entire space.
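Before going further, the conclusion of Eq. (2.2.21)—a linear operator acts by matrix multiplication, and successive operators need not commute—can be sketched numerically (matrices chosen as a made-up example):

```python
# Sketch: an operator with matrix elements A[i][j] = <i|A|j> acting on a ket.

def apply(A, v):
    """v'_i = sum_j A_ij v_j, Eq. (2.2.21)."""
    return [sum(Aij * vj for Aij, vj in zip(row, v)) for row in A]

def compose(A, B):
    """Matrix product AB: apply B first, then A."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 1], [1, 0]]      # swaps the two basis components
B = [[1, 0], [0, -1]]     # flips the sign of the second component
v = [2 + 1j, 3]

assert apply(A, v) == [3, 2 + 1j]
# Applying B then A agrees with applying the product AB:
assert apply(A, apply(B, v)) == apply(compose(A, B), v)
# Operators do not in general commute, just like matrices:
assert compose(A, B) != compose(B, A)
```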
To give an example, the linear operator \(|1\rangle\langle 1|\) applied to any vector in the space picks out the vector’s component in the \(|1\rangle\) direction. It’s called a projection operator. The operator \((|1\rangle\langle 1|+|2\rangle\langle 2|)\) projects a vector into its components in the subspace spanned by the vectors \(|1\rangle\) and \(|2\rangle\), and so on—if we extend the sum to be over the whole basis, we recover the identity operator.

Exercise: prove that the matrix representation of the projection operator \((|1\rangle\langle 1|+|2\rangle\langle 2|)\) has all elements zero except the first two diagonal elements, which are equal to one.

There can be no inverse operator to a nontrivial projection operator, since the information about components of the vector perpendicular to the projected subspace is lost.

The Adjoint Operator and Hermitian Matrices

As we’ve discussed, if a ket \(|V\rangle\) in the n-dimensional space is written as a column vector with \(n\) (complex) components, the corresponding bra is a row vector having as elements the complex conjugates of the ket elements. \(\langle W|V\rangle=\langle V|W\rangle^*\) then follows automatically from standard matrix multiplication rules, and on multiplying \(|V\rangle\) by a complex number \(a\) to get \(a|V\rangle\) (meaning that each element in the column of numbers is multiplied by \(a\)) the corresponding bra goes to \(\langle V|a^*=a^*\langle V|\).

But suppose that instead of multiplying a ket by a number, we operate on it with a linear operator. What generates the parallel transformation among the bras? In other words, if \(A|V\rangle=|V'\rangle\), what operator sends the bra \(\langle V|\) to \(\langle V'|\)?
It must be a linear operator, because \(A\) is linear: that is, if under \(A\) \(|V_1\rangle \to |V_1'\rangle\), \(|V_2\rangle \to |V_2'\rangle\) and \(|V_3\rangle=|V_1\rangle +|V_2\rangle\), then under \(A\) \(|V_3\rangle\) is required to go to \(|V_3'\rangle=|V_1'\rangle +|V_2'\rangle\). Consequently, under the parallel bra transformation we must have \(\langle V_1|\to \langle V_1'|\), \(\langle V_2|\to \langle V_2'|\) and \(\langle V_3|\to \langle V_3'|\)—the bra transformation is necessarily also linear. Recalling that the bra is an n-element row vector, the most general linear transformation sending it to another bra is an \(n\times n\) matrix operating on the bra from the right.

This bra operator is called the adjoint of \(A\), written \(A^{\dagger}\). That is, the ket \(A|V\rangle\) has corresponding bra \(\langle V|A^{\dagger}\). In an orthonormal basis, using the notation \(\langle Ai|\) to denote the bra \(\langle i|A^{\dagger}\) corresponding to the ket \(A|i\rangle=|Ai\rangle\), say, \[ (A^{\dagger})_{ij}=\langle i|A^{\dagger}|j\rangle=\langle Ai|j\rangle=\langle j|Ai\rangle^*=A_{ji}^* \tag{2.2.22}\] So the adjoint operator is the transpose complex conjugate.

Important: for a product of two operators (prove this!), \[ (AB)^{\dagger}=B^{\dagger}A^{\dagger} \tag{2.2.23}\]

An operator equal to its adjoint, \(A=A^{\dagger}\), is called Hermitian. As we shall find in the next lecture, Hermitian operators are of central importance in quantum mechanics. An operator equal to minus its adjoint, \(A=-A^{\dagger}\), is anti-Hermitian (sometimes termed skew-Hermitian).
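The adjoint rules (2.2.22) and (2.2.23) are easy to verify numerically; the matrices below are made-up examples:

```python
# Sketch: the adjoint is the transpose complex conjugate, (A†)_ij = A_ji*.

def adjoint(A):
    """Transpose the matrix and conjugate every element, Eq. (2.2.22)."""
    n = len(A)
    return [[A[j][i].conjugate() for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2j], [3, 4 + 1j]]
B = [[0, 1 - 1j], [2, 5j]]

# (AB)† = B† A†, Eq. (2.2.23) -- note the reversed order:
assert adjoint(matmul(A, B)) == matmul(adjoint(B), adjoint(A))

# A Hermitian matrix equals its own adjoint:
H = [[2, 1 - 1j], [1 + 1j, 5]]
assert adjoint(H) == H
```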
These two operator types are essentially generalizations of real and imaginary numbers: any operator can be expressed as a sum of a Hermitian operator and an anti-Hermitian operator, \[ A=\frac{1}{2}(A+A^{\dagger})+\frac{1}{2}(A-A^{\dagger}) \tag{2.2.24}\]

The definition of adjoint naturally extends to vectors and numbers: the adjoint of a ket is the corresponding bra, the adjoint of a number is its complex conjugate. This is useful to bear in mind when taking the adjoint of an operator which may be partially constructed of vectors and numbers, such as projection-type operators. The adjoint of a product of matrices, vectors and numbers is the product of the adjoints in reverse order. (Of course, for numbers the order doesn’t matter.)

Unitary Operators

An operator is unitary if \(U^{\dagger }U=1\). This implies first that \(U\) operating on any vector gives a vector having the same norm, since the new norm \(\langle V|U^{\dagger }U|V\rangle=\langle V|V\rangle\). Furthermore, inner products are preserved, \(\langle W|U^{\dagger }U|V\rangle=\langle W|V\rangle\). Therefore, under a unitary transformation the original orthonormal basis in the space must go to another orthonormal basis.

Conversely, any transformation that takes one orthonormal basis into another one is a unitary transformation. To see this, suppose that a linear transformation \(A\) sends the members of the orthonormal basis \((|1\rangle_1,|2\rangle_1,...,|n\rangle_1)\) to the different orthonormal set \((|1\rangle_2,|2\rangle_2,...,|n\rangle_2)\), so \(A|1\rangle_1=|1\rangle_2\), etc. Then the vector \(|V\rangle= \sum v_i |i\rangle_1\) will go to \(|V'\rangle=A|V\rangle=\sum v_i |i\rangle_2\), having the same norm, \(\langle V'|V'\rangle= \langle V|V\rangle=\sum |v_i|^2\). A matrix element \(\langle W'|V'\rangle= \langle W|V\rangle=\sum w_i^*v_i\), but also \(\langle W'|V'\rangle=\langle W|A^{\dagger}A|V\rangle\).
That is, \(\langle W|V\rangle= \langle W|A^{\dagger}A|V\rangle\) for arbitrary kets \(|V\rangle, \: |W\rangle\). This is only possible if \(A^{\dagger}A=1\), so \(A\) is unitary.

A unitary operation amounts to a rotation (possibly combined with a reflection) in the space. Evidently, since \(U^{\dagger}U=1\), the adjoint \(U^{\dagger}\) rotates the basis back—it is the inverse operation, and so \(UU^{\dagger}=1\) also, that is, \(U\) and \(U^{\dagger}\) commute.

Determinants

We review in this section the determinant of a matrix, a function closely related to the operator properties of the matrix. Let’s start with \(2\times2\) matrices: \[ A=\begin{pmatrix} a_{11} &a_{12} \\ a_{21} &a_{22} \end{pmatrix} \tag{2.2.25}\] The determinant of this matrix is defined by: \[ \det A=|A|=a_{11}a_{22}-a_{12}a_{21} \tag{2.2.26}\]

Writing the two rows of the matrix as vectors: \[ \vec a_1^R=(a_{11},a_{12}) \\ \vec a_2^R=(a_{21},a_{22}) \tag{2.2.27}\] (\(R\) denotes row), \(\det A=\vec a_1^R \times \vec a_2^R\) is just the area (with appropriate sign) of the parallelogram having the two row vectors as adjacent sides. This is zero if the two vectors are parallel (linearly dependent) and is not changed by adding any multiple of \(\vec a_2^R\) to \(\vec a_1^R\) (because the new parallelogram has the same base and the same height as the original—check this by drawing).

Let’s go on to the more interesting case of \(3\times3\) matrices: \[ A=\begin{pmatrix} a_{11}&a_{12}&a_{13} \\ a_{21}&a_{22}&a_{23} \\ a_{31}&a_{32}&a_{33} \end{pmatrix} \tag{2.2.28}\] The determinant of \(A\) is defined as \[ \det A=\varepsilon_{ijk}a_{1i}a_{2j}a_{3k} \tag{2.2.29}\] where \(\varepsilon_{ijk}=0\) if any two suffixes are equal, +1 if \(ijk = 123, \; 231 \; or\; 312\) (that is to say, an even permutation of 123) and –1 if \(ijk\) is an odd permutation of 123. Repeated suffixes, of course, imply summation here.
Writing this out explicitly, \[ \det A= a_{11}a_{22}a_{33}+a_{21}a_{32}a_{13}+a_{31}a_{12}a_{23}-a_{11}a_{32}a_{23}-a_{21}a_{12}a_{33}-a_{31}a_{22}a_{13} \tag{2.2.30}\]

Just as in two dimensions, it’s worth looking at this expression in terms of vectors representing the rows of the matrix \[ \vec a_1^R=(a_{11},a_{12},a_{13}) \\ \vec a_2^R=(a_{21},a_{22},a_{23}) \\ \vec a_3^R=(a_{31},a_{32},a_{33}) \tag{2.2.31}\] so \[ A= \begin{pmatrix} \vec a_1^R\\ \vec a_2^R\\ \vec a_3^R \end{pmatrix} \: , \; and \; we \; see \; that \; \det A=(\vec a_1^R \times \vec a_2^R)\cdot \vec a_3^R \tag{2.2.32}\]

This is the volume of the parallelepiped formed by the three vectors being adjacent sides (meeting at one corner, the origin). This parallelepiped volume will of course be zero if the three vectors lie in a plane, and it is not changed if a multiple of one of the vectors is added to another of the vectors. That is to say, the determinant of a matrix is not changed if a multiple of one row is added to another row. This is because the determinant is linear in the elements of a single row, \[ \det \begin{pmatrix} \vec a_1^R+\lambda\vec a_2^R \\ \vec a_2^R \\ \vec a_3^R \end{pmatrix}=\det \begin{pmatrix} \vec a_1^R\\ \vec a_2^R \\ \vec a_3^R \end{pmatrix} +\lambda\det \begin{pmatrix} \vec a_2^R\\ \vec a_2^R\\ \vec a_3^R \end{pmatrix} \tag{2.2.33}\] and the last term is zero because two rows are identical—so the triple vector product vanishes.

A more general way of stating this, applicable to larger determinants, is that for a determinant with two identical rows, the symmetry of the two rows, together with the antisymmetry of \(\varepsilon_{ijk}\), ensures that the terms in the sum all cancel in pairs. Since the determinant is not altered by adding some multiple of one row to another, if the rows are linearly dependent, one row could be made identically zero by adding the right multiples of the other rows.
Since every term in the expression for the determinant has one element from each row, the determinant would then be identically zero. For the three-dimensional case, the linear dependence of the rows means the corresponding vectors lie in a plane, and the parallelepiped is flat. The algebraic argument generalizes easily to \(n\times n\) determinants: they are identically zero if the rows are linearly dependent.

The generalization from \(3\times3\) to \(n\times n\) determinants is that \(\det A=\varepsilon_{ijk}a_{1i}a_{2j}a_{3k}\) becomes: \[ \det A=\varepsilon_{ijk...p}a_{1i}a_{2j}a_{3k}...a_{np} \tag{2.2.34}\] where \(ijk...p\) is summed over all permutations of \(123...n\), and the \(\varepsilon\) symbol is zero if any two of its suffixes are equal, +1 for an even permutation and -1 for an odd permutation. (Note: any permutation can be written as a product of swaps of neighbors. Such a representation is in general not unique, but for a given permutation, all such representations will have either an odd number of elements or an even number.)

An important theorem is that for a product of two matrices \(A\), \(B\) the determinant of the product is the product of the determinants, \(\det AB=\det A\times \det B\). This can be verified by brute force for \(2\times2\) matrices, and a proof in the general case can be found in any book on mathematical physics (for example, Byron and Fuller).

It can also be proved that if the rows are linearly independent, the determinant cannot be zero. (Here’s a proof: take an \(n\times n\) matrix with the \(n\) row vectors linearly independent. Now consider the components of those vectors in the \(n-1\) dimensional subspace perpendicular to \((1, 0, ... ,0)\). These \(n\) vectors, each with only \(n-1\) components, must be linearly dependent, since there are more of them than the dimension of the space.
So we can take some combination of the rows below the first row and subtract it from the first row to leave the first row \((a, 0, 0, ... ,0)\), and \(a\) cannot be zero since we have a matrix with \(n\) linearly independent rows. We can then subtract multiples of this first row from the other rows to get a determinant having zeros in the first column below the first row. Now look at the \(n-1\) by \(n-1\) determinant to be multiplied by \(a\). Its rows must be linearly independent since those of the original matrix were. Now proceed by induction.)

To return to three dimensions, it is clear from the form of \[ \det A= a_{11}a_{22}a_{33}+a_{21}a_{32}a_{13}+a_{31}a_{12}a_{23}-a_{11}a_{32}a_{23}-a_{21}a_{12}a_{33}-a_{31}a_{22}a_{13} \tag{2.2.30}\] that we could equally have taken the columns of \(A\) as three vectors, \(A=(\vec a_1^C, \vec a_2^C, \vec a_3^C)\) in an obvious notation, \(\det A=(\vec a_1^C \times \vec a_2^C)\cdot \vec a_3^C\), and linear dependence among the columns will also ensure the vanishing of the determinant—so, in fact, linear dependence of the columns ensures linear dependence of the rows. This, too, generalizes to \(n\times n\): in the definition of determinant \(\det A=\varepsilon_{ijk...p}a_{1i}a_{2j}a_{3k}...a_{np}\), the row suffix is fixed and the column suffix goes over all permissible permutations, with the appropriate sign—but the same terms would be generated by having the column suffixes kept in numerical order and allowing the row suffix to undergo the permutations.

An Aside: Reciprocal Lattice Vectors

It is perhaps worth mentioning how the inverse of a \(3\times 3\) matrix operator can be understood in terms of vectors. For a set of linearly independent vectors \((\vec a_1, \vec a_2, \vec a_3)\), a reciprocal set \((\vec b_1, \vec b_2, \vec b_3)\) can be defined by \[ \vec b_1 =\frac{\vec a_2\times \vec a_3}{\vec a_1\times \vec a_2 \cdot \vec a_3} \tag{2.2.35}\] and the obvious cyclic definitions for the other two reciprocal vectors.
We see immediately that \[\vec a_i\cdot \vec b_j =\delta_{ij} \tag{2.2.36}\] from which it follows that the inverse matrix to \[ A=\begin{pmatrix} \vec a_1^R\\ \vec a_2^R \\ \vec a_3^R \end{pmatrix} \; is \; B=\begin{pmatrix}\vec b_1^C& \vec b_2^C& \vec b_3^C\end{pmatrix} \tag{2.2.37}\]

(These reciprocal vectors are important in x-ray crystallography, for example. If a crystalline lattice has certain atoms at positions \(n_1\vec a_1 +n_2\vec a_2+n_3\vec a_3\), where \(n_1, n_2, n_3\) are integers, the reciprocal vectors are the set of normals to possible planes of the atoms, and these planes of atoms are the important elements in the diffractive x-ray scattering.)

Eigenkets and Eigenvalues

If an operator \(A\) operating on a ket \(|V\rangle\) gives a multiple of the same ket, \[ A|V\rangle =\lambda|V\rangle \tag{2.2.38}\] then \(|V\rangle\) is said to be an eigenket (or, just as often, eigenvector, or eigenstate!) of \(A\) with eigenvalue \(\lambda\). Eigenkets and eigenvalues are of central importance in quantum mechanics: dynamical variables are operators, a physical measurement of a dynamical variable yields an eigenvalue of the operator, and forces the system into an eigenket.

In this section, we shall show how to find the eigenvalues and corresponding eigenkets for an operator \(A\). We’ll use the notation \(A|a_i\rangle =a_i|a_i\rangle\) for the set of eigenkets \(|a_i\rangle\) with corresponding eigenvalues \(a_i\). (Obviously, in the eigenvalue equation here the suffix \(i\) is not summed over.)

The first step in solving \(A|V\rangle =\lambda|V\rangle\) is to find the allowed eigenvalues \(a_i\). Writing the equation in matrix form: \[ \begin{pmatrix} A_{11}-\lambda & A_{12} &.&.& A_{1n} \\ A_{21} & A_{22}-\lambda &.&.&. \\ .&.&.&.&. \\ .&.&.&.&.
\\ A_{n1} &.&.&.& A_{nn}-\lambda \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ .\\ .\\ v_n \end{pmatrix} =0 \tag{2.2.39}\]

This equation is actually telling us that the columns of the matrix \(A-\lambda I\) are linearly dependent! To see this, write the matrix as a row vector each element of which is one of its columns, and the equation becomes \[ (\vec M_1^C,\vec M_2^C,...,\vec M_n^C) \begin{pmatrix} v_1\\ .\\ .\\ .\\ v_n \end{pmatrix}=0 \tag{2.2.40}\] which is to say \[ v_1\vec M_1^C+v_2\vec M_2^C+...+v_n\vec M_n^C=0 \tag{2.2.41}\] the columns of the matrix are indeed a linearly dependent set.

We know that means the determinant of the matrix \(A-\lambda I\) is zero, \[ \begin{vmatrix} A_{11}-\lambda & A_{12} &.&.& A_{1n} \\ A_{21} & A_{22}-\lambda &.&.&. \\ .&.&.&.&. \\ .&.&.&.&. \\ A_{n1} &.&.&.& A_{nn}-\lambda \end{vmatrix}=0 \tag{2.2.42}\]

Evaluating the determinant using \(\det A=\varepsilon_{ijk...p}a_{1i}a_{2j}a_{3k}....a_{np}\) gives an \(n^{th}\) order polynomial in \(\lambda\), sometimes called the characteristic polynomial. Any polynomial can be written in terms of its roots: \[ C(\lambda-a_1)(\lambda-a_2)....(\lambda-a_n)=0 \tag{2.2.43}\] where the \(a_i\)'s are the roots of the polynomial and \(C\) is an overall constant, which from inspection of the determinant we can see to be \((-1)^n\). (It’s the coefficient of \(\lambda^n\).) The polynomial roots (which we don’t yet know) are in fact the eigenvalues. For example, putting \(\lambda=a_1\) in the matrix, \(\det (A-a_1I)=0\), which means that \((A-a_1I)|V\rangle=0\) has a nontrivial solution \(|V\rangle\), and this is our eigenvector \(|a_1\rangle\).

Notice that the diagonal term in the determinant \((A_{11}-\lambda)(A_{22}-\lambda)....(A_{nn}-\lambda)\) generates the leading two orders in the polynomial \((-1)^n(\lambda^{n}-(A_{11}+...+A_{nn})\lambda^{n-1})\), (and some lower order terms too).
Equating the coefficient of \(\lambda^{n-1}\) here with that in \((-1)^n(\lambda-a_1)(\lambda-a_2)....(\lambda-a_n)\), \[ \sum_{i=1}^n a_i=\sum_{i=1}^n A_{ii}= Tr A \tag{2.2.44}\]

Putting \(\lambda=0\) in both the determinantal and the polynomial representations (in other words, equating the \(\lambda\)-independent terms), \[ \prod_{i=1}^n a_i=\det A \tag{2.2.45}\]

So we can find both the sum and the product of the eigenvalues directly from the determinant, and for a \(2\times 2\) matrix this is enough to solve the problem. For anything bigger, the method is to solve the polynomial equation \(\det (A-\lambda I)=0\) to find the set of eigenvalues, then use them to calculate the corresponding eigenvectors. This is done one at a time.

Labeling the first eigenvalue found as \(a_1\), the corresponding equation for the components \(v_i\) of the eigenvector \(|a_1\rangle\) is \[ \begin{pmatrix} A_{11}-a_1 & A_{12} &.&.& A_{1n} \\ A_{21} & A_{22}-a_1 &.&.&. \\ .&.&.&.&. \\ .&.&.&.&. \\ A_{n1} &.&.&.& A_{nn}-a_1 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ .\\ .\\ v_n \end{pmatrix} =0 \tag{2.2.46}\]

This looks like \(n\) equations for the \(n\) numbers \(v_i\), but it isn’t: remember the rows are linearly dependent, so there are only \(n–1\) independent equations. However, that’s enough to determine the ratios of the vector components \(v_1,...,v_n\), then finally the eigenvector is normalized. The process is then repeated for each eigenvalue. (Extra care is needed if the polynomial has coincident roots—we’ll discuss that case later.)

Eigenvalues and Eigenstates of Hermitian Matrices

For a Hermitian matrix, it is easy to establish that the eigenvalues are always real. (Note: A basic postulate of Quantum Mechanics, discussed in the next lecture, is that physical observables are represented by Hermitian operators.)
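The recipe just described (solve the characteristic polynomial \(\det(A-\lambda I)=0\) for the eigenvalues, then back-substitute for the eigenvectors) is exactly what a numerical library automates. A quick NumPy check of Eqs. (2.2.44) and (2.2.45), on an arbitrary real matrix whose entries are invented purely for illustration:

```python
import numpy as np

# An arbitrary 3x3 matrix (invented entries) to test the two identities above.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# numpy finds the roots of det(A - lambda*I) = 0 and the corresponding
# eigenvectors (returned as the columns of the second array).
eigenvalues, eigenvectors = np.linalg.eig(A)

# Each pair satisfies A|a_i> = a_i |a_i>:
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

print(eigenvalues.sum(), np.trace(A))        # sum of eigenvalues = Tr A   (2.2.44)
print(eigenvalues.prod(), np.linalg.det(A))  # product of eigenvalues = det A (2.2.45)
```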
Taking (in this section) \(A\) to be Hermitian, \(A=A^{\dagger}\), and labeling the eigenkets by the eigenvalue, that is, \[ A|a_1\rangle=a_1|a_1\rangle \tag{2.2.47}\] the inner product with the bra \(\langle a_1|\) gives \(\langle a_1|A|a_1\rangle=a_1\langle a_1|a_1\rangle\). But the inner product of the adjoint equation (remembering \(A=A^{\dagger}\)) \[ \langle a_1|A=a_1^*\langle a_1| \tag{2.2.48}\] with \(|a_1\rangle\) gives \(\langle a_1|A|a_1\rangle=a_1^*\langle a_1|a_1\rangle\), so \(a_1=a_1^*\), and all the eigenvalues must be real.

They certainly don’t have to all be different—for example, the unit matrix \(I\) is Hermitian, and all its eigenvalues are of course 1. But let’s first consider the case where they are all different. It’s easy to show that the eigenkets belonging to different eigenvalues are orthogonal. If \[ \begin{matrix} A|a_1\rangle=a_1|a_1\rangle \\ A|a_2\rangle=a_2|a_2\rangle \end{matrix} \tag{2.2.49}\] take the adjoint of the first equation and then the inner product with \(|a_2\rangle\), and compare it with the inner product of the second equation with \(\langle a_1|\): \[ \langle a_1|A|a_2\rangle =a_1\langle a_1|a_2\rangle=a_2\langle a_1|a_2\rangle \tag{2.2.50}\] so \(\langle a_1|a_2\rangle=0\) unless the eigenvalues are equal. (If they are equal, they are referred to as degenerate eigenvalues.)

Let’s first consider the nondegenerate case: \(A\) has all eigenvalues distinct. The eigenkets of \(A\), appropriately normalized, form an orthonormal basis in the space.
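Both facts (real eigenvalues, orthogonal eigenkets) are easy to verify numerically. The sketch below uses NumPy's Hermitian eigensolver on an invented 2×2 Hermitian matrix, stacking the normalized eigenkets as the columns of a matrix \(V\), as the text does next:

```python
import numpy as np

# An invented 2x2 Hermitian matrix: A equals its conjugate transpose.
A = np.array([[2.0,        1.0 - 1.0j],
              [1.0 + 1.0j, 3.0       ]])
assert np.allclose(A, A.conj().T)

# eigh is specialized to Hermitian matrices: it returns real eigenvalues
# and orthonormal eigenvectors (the columns of V).
a, V = np.linalg.eigh(A)

overlap = V.conj().T @ V     # matrix of inner products <a_i|a_j>: the identity
D = V.conj().T @ A @ V       # should be diag(a_1, a_2)
print(a)                     # real, distinct eigenvalues
```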
Write \[ |a_1\rangle=\begin{pmatrix} v_{11}\\ v_{21}\\ \vdots\\ v_{n1}\end{pmatrix},\; and\, consider\, the\, matrix\; V=\begin{pmatrix} v_{11}&v_{12}&\dots&v_{1n} \\ v_{21}&v_{22}&\dots&v_{2n}\\ \vdots&\vdots&\ddots&\vdots \\ v_{n1}&v_{n2}&\dots&v_{nn} \end{pmatrix}=\begin{pmatrix}|a_1\rangle & |a_2\rangle & \dots & |a_n\rangle \end{pmatrix} \tag{2.2.51}\]

Now \[ AV=A\begin{pmatrix}|a_1\rangle & |a_2\rangle & \dots & |a_n\rangle \end{pmatrix}=\begin{pmatrix}a_1|a_1\rangle & a_2|a_2\rangle & \dots & a_n|a_n\rangle \end{pmatrix} \tag{2.2.52}\]

so \[ V^{\dagger}AV=\begin{pmatrix} \langle a_1|\\ \langle a_2|\\ \vdots\\ \langle a_n|\end{pmatrix}\begin{pmatrix}a_1|a_1\rangle & a_2|a_2\rangle & \dots & a_n|a_n\rangle \end{pmatrix}=\begin{pmatrix} a_1&0&\dots&0 \\ 0&a_2&\dots&0\\ \vdots&\vdots&\ddots&\vdots \\ 0&0&\dots&a_n \end{pmatrix} \tag{2.2.53}\]

Note also that, obviously, \(V\) is unitary: \[ V^{\dagger}V=\begin{pmatrix} \langle a_1|\\ \langle a_2|\\ \vdots\\ \langle a_n|\end{pmatrix}\begin{pmatrix}|a_1\rangle & |a_2\rangle & \dots & |a_n\rangle \end{pmatrix}=\begin{pmatrix} 1&0&\dots&0 \\ 0&1&\dots&0\\ \vdots&\vdots&\ddots&\vdots \\ 0&0&\dots&1\end{pmatrix} \tag{2.2.54}\]

We have established, then, that for a Hermitian matrix with distinct eigenvalues (nondegenerate case), the unitary matrix \(V\) having columns identical to the normalized eigenkets of \(A\) diagonalizes \(A\), that is, \(V^{\dagger}AV\) is diagonal. Furthermore, its (diagonal) elements equal the corresponding eigenvalues of \(A\). Another way of saying this is that the unitary matrix \(V\) is the transformation from the original orthonormal basis in this space to the basis formed of the normalized eigenkets of \(A\).

Proof that the Eigenvectors of a Hermitian Matrix Span the Space

We’ll now move on to the general case: what if some of the eigenvalues of \(A\) are the same? In this case, any linear combination of them is also an eigenvector with the same eigenvalue.
Assuming they form a basis in the subspace, the Gram Schmidt procedure can be used to make it orthonormal, and so part of an orthonormal basis of the whole space. However, we have not actually established that the eigenvectors do form a basis in a degenerate subspace. Could it be that (to take the simplest case) the two eigenvectors for the single eigenvalue turn out to be parallel? This is actually the case for some \(2\times2\) matrices, for example \(\begin{pmatrix}1&1\\0&1\end{pmatrix}\), so we need to prove it cannot happen for Hermitian matrices, and likewise for the analogous statements about higher-dimensional degenerate subspaces. A clear presentation is given in Byron and Fuller, section 4.7. We follow it here.

The procedure is by induction from the \(2\times2\) case. The general \(2\times2\) Hermitian matrix has the form \[ \begin{pmatrix}a&b\\b^*&c\end{pmatrix} \tag{2.2.55}\] where \(a\), \(c\) are real. It is easy to check that if the eigenvalues are degenerate, this matrix becomes a real multiple of the identity, and so trivially has two orthonormal eigenvectors. Since we already know that if the eigenvalues of a \(2\times2\) Hermitian matrix are distinct it can be diagonalized by the unitary transformation formed from its orthonormal eigenvectors, we have established that any \(2\times2\) Hermitian matrix can be so diagonalized.

To carry out the induction process, we now assume any \((n-1)\times(n-1)\) Hermitian matrix can be diagonalized by a unitary transformation. We need to prove this means it’s also true for an \(n\times n\) Hermitian matrix \(A\). (Recall a unitary transformation takes one complete orthonormal basis to another. If it diagonalizes a Hermitian matrix, the new basis is necessarily the set of orthonormalized eigenvectors. Hence, if the matrix can be diagonalized, the eigenvectors do span the n-dimensional space.)

Choose an eigenvalue \(a_1\) of \(A\), with normalized eigenvector \(|a_1\rangle=(v_{11},v_{21},....,v_{n1})^T\).
(We put in \(T\) for transpose, to save the awkwardness of filling the page with a few column vectors.) We construct a unitary operator \(V\) by making this the first column, then filling in with \(n-1\) other normalized vectors to construct, with \(|a_1\rangle\), an n-dimensional orthonormal basis.

Now, since \(A|a_1\rangle=a_1|a_1\rangle\), the first column of the matrix \(AV\) will just be \(a_1|a_1\rangle\), and the rows of the matrix \(V^{\dagger}=V^{-1}\) will be \(\langle a_1|\) followed by \(n-1\) normalized vectors orthogonal to it, so the first column of the matrix \(V^{\dagger}AV\) will be \(a_1\) followed by zeros. It is easy to check that \(V^{\dagger}AV\) is Hermitian, since \(A\) is, so its first row is also zero beyond the first diagonal term.

This establishes that for an \(n\times n\) Hermitian matrix, a unitary transformation exists to put it in the form: \[ V^{\dagger}AV=\begin{pmatrix} a_1 &0&.&.&0\\ 0& M_{22}&.&.&M_{2n} \\ 0&.&.&.&. \\ 0&.&.&.&. \\ 0 &M_{n2}&.&.& M_{nn} \end{pmatrix} \tag{2.2.56}\]

But we can now perform a second unitary transformation in the \((n-1)\times(n-1)\) subspace orthogonal to \(|a_1\rangle\) (this of course leaves \(|a_1\rangle\) invariant), to complete the full diagonalization—that is to say, the existence of the \((n-1)\times(n-1)\) diagonalization, plus the argument above, guarantees the existence of the \(n\times n\) diagonalization: the induction is complete.

Diagonalizing a Hermitian Matrix

As discussed above, a Hermitian matrix is diagonal in the orthonormal basis of its set of eigenvectors: \(|a_1\rangle,|a_2\rangle,...,|a_n\rangle\), since \[ \langle a_i|A|a_j\rangle=\langle a_i|a_j|a_j\rangle=a_j\langle a_i|a_j\rangle=a_j\delta_{ij} \tag{2.2.57}\]

If we are given the matrix elements of \(A\) in some other orthonormal basis, to diagonalize it we need to rotate from the initial orthonormal basis to one made up of the eigenkets of \(A\).
Denoting the initial orthonormal basis in the standard fashion \[ |1\rangle=\begin{pmatrix} 1\\0\\0\\ \vdots\\0\end{pmatrix}, \; |2\rangle=\begin{pmatrix} 0\\1\\0\\ \vdots\\0\end{pmatrix}, \; |i\rangle=\begin{pmatrix} 0\\ \vdots\\ 1\\ \vdots\\0\end{pmatrix}... \; (1\, in\, i^{th}\, place\, down), \; |n\rangle=\begin{pmatrix} 0\\0\\0\\ \vdots\\1\end{pmatrix} \tag{2.2.58}\] the elements of the matrix are \(A_{ij}=\langle i|A|j\rangle\).

A transformation from one orthonormal basis to another is a unitary transformation, as discussed above, so we write it \[ |V\rangle \to |V'\rangle=U|V\rangle \tag{2.2.59}\] Under this transformation, the matrix element \[ \langle W|A|V\rangle \to \langle W'|A|V'\rangle=\langle W|U^{\dagger}AU|V\rangle \tag{2.2.60}\] So we can find the appropriate transformation matrix \(U\) by requiring that \(U^{\dagger}AU\) be diagonal with respect to the original set of basis vectors. (Transforming the operator in this way, leaving the vector space alone, is equivalent to rotating the vector space and leaving the operator alone. Of course, in a system with more than one operator, the same transformation would have to be applied to all the operators.)

In fact, just as we discussed for the nondegenerate (distinct eigenvalues) case, the unitary matrix \(U\) we need is just composed of the normalized eigenkets of the operator \(A\), \[ U=(|a_1\rangle,|a_2\rangle,...,|a_n\rangle) \tag{2.2.61}\] And it follows as before that \[ (U^{\dagger}AU)_{ij}=\langle a_i|a_j|a_j\rangle=\delta_{ij}a_j, \; a\, diagonal\, matrix. \tag{2.2.62}\] (The repeated suffixes here are of course not summed over.) If some of the eigenvalues are the same, the Gram Schmidt procedure may be needed to generate an orthogonal set, as mentioned earlier.
Functions of Matrices

The same unitary operator \(U\) that diagonalizes an Hermitian matrix \(A\) will also diagonalize \(A^2\), because \[ U^{-1}A^2U=U^{-1}AAU=U^{-1}AUU^{-1}AU \tag{2.2.63}\] so \[ U^{\dagger}A^2U=\begin{pmatrix} a_1^2&0&0&.&0 \\ 0&a_2^2&0&.&0\\ 0&0&a_3^2&.&0 \\ .&.&.&.&. \\ 0&.&.&.&a_n^2\end{pmatrix} \tag{2.2.64}\]

Evidently, this same process works for any power of \(A\), and formally for any function of \(A\) expressible as a power series, but of course convergence properties need to be considered, and this becomes trickier on going from finite matrices to operators on infinite spaces.

Commuting Hermitian Matrices

From the above, the set of powers of an Hermitian matrix all commute with each other, and have a common set of eigenvectors (but not the same eigenvalues, obviously). In fact it is not difficult to show that any two Hermitian matrices that commute with each other have the same set of eigenvectors (after possible Gram Schmidt rearrangements in degenerate subspaces).

If two \(n\times n\) Hermitian matrices \(A\), \(B\) commute, that is, \(AB=BA\), and \(A\) has a nondegenerate set of eigenvectors \(A|a_i\rangle=a_i|a_i\rangle\), then \(AB|a_i\rangle=BA|a_i\rangle =Ba_i|a_i\rangle=a_iB|a_i\rangle\), that is, \(B|a_i\rangle\) is an eigenvector of \(A\) with eigenvalue \(a_i\). Since \(A\) is nondegenerate, \(B|a_i\rangle\) must be some multiple of \(|a_i\rangle\), and we conclude that \(A\), \(B\) have the same set of eigenvectors.

Now suppose \(A\) is degenerate, and consider the \(m\times m\) subspace \(S_{a_i}\) spanned by the eigenvectors \(|a_i,1\rangle,\; |a_i,2\rangle,...\) of \(A\) having eigenvalue \(a_i\). Applying the argument in the paragraph above, \(B|a_i,1\rangle,\; B|a_i,2\rangle,...\) must also lie in this subspace.
Therefore, if we transform \(B\) with the same unitary transformation that diagonalized \(A\), \(B\) will not in general be diagonal in the subspace \(S_{a_i}\), but it will be what is termed block diagonal, in that if \(B\) operates on any vector in \(S_{a_i}\) it gives a vector in \(S_{a_i}\). \(B\) can be written as two diagonal blocks: one \(m\times m\), one \((n-m)\times (n-m)\), with zeroes outside these diagonal blocks, for example, for \(m=2,\; n=5\): \[ \begin{pmatrix} b_{11}&b_{12} &0&0&0 \\ b_{21}&b_{22}&0&0&0 \\ 0&0&b_{33}&b_{34}&b_{35} \\ 0&0&b_{43}&b_{44}&b_{45} \\ 0&0&b_{53}&b_{54}&b_{55} \end{pmatrix} \tag{2.2.65}\]

And, in fact, if there is only one degenerate eigenvalue that second block will only have nonzero terms on the diagonal: \[ \begin{pmatrix} b_{11}&b_{12}&0&0&0 \\ b_{21}&b_{22}&0&0&0 \\ 0&0&b_3&0&0 \\ 0&0&0&b_4&0 \\ 0&0&0&0&b_5 \end{pmatrix} \]

\(B\) therefore operates on two subspaces, one m-dimensional, one (n-m)-dimensional, independently—a vector entirely in one subspace stays there. This means we can complete the diagonalization of \(B\) with a unitary operator that only operates on the \(m\times m\) block \(S_{a_i}\). Such an operator will also affect the eigenvectors of \(A\), but that doesn’t matter, because all vectors in this subspace are eigenvectors of \(A\) with the same eigenvalue, so as far as \(A\) is concerned, we can choose any orthonormal basis we like—the basis vectors will still be eigenvectors. This establishes that any two commuting Hermitian matrices can be diagonalized at the same time. Obviously, this can never be true of noncommuting matrices, since all diagonal matrices commute.

Diagonalizing a Unitary Matrix

Any unitary matrix can be diagonalized by a unitary transformation. To see this, recall that any matrix \(M\) can be written as a sum of a Hermitian matrix and an anti-Hermitian matrix, \[ M=\frac{M+M^{\dagger}}{2}+\frac{M-M^{\dagger}}{2}=A+iB \tag{2.2.66}\] where both \(A,\; B\) are Hermitian.
This is the matrix analogue of writing an arbitrary complex number as a sum of real and imaginary parts. If \(A,\; B\) commute, they can be simultaneously diagonalized (see the previous section), and therefore \(M\) can be diagonalized. Now, if a unitary matrix is expressed in this form \(U=A+iB\) with \(A,\; B\) Hermitian, it easily follows from \(UU^{\dagger}=U^{\dagger}U=1\) that \(A,\; B\) commute, so any unitary matrix \(U\) can be diagonalized by a unitary transformation. More generally, if a matrix \(M\) commutes with its adjoint \(M^{\dagger}\), it can be diagonalized.

(Note: it is not possible to diagonalize \(M\) unless both \(A,\; B\) are simultaneously diagonalized. This follows from \(U^{\dagger}AU,\; U^{\dagger}iBU\) being Hermitian and anti-Hermitian for any unitary operator \(U\): their off-diagonal elements cannot cancel each other, so they must all be zero if \(M\) has been diagonalized by \(U\), in which case the two transformed matrices \(U^{\dagger}AU, \; U^{\dagger}iBU\) are diagonal, therefore commute, and so do the original matrices \(A,\; B\).)

It is worthwhile looking at a specific example, a simple rotation of one orthonormal basis into another in three dimensions. Obviously, the axis through the origin about which the basis is rotated is an eigenvector of the transformation. It’s less clear what the other two eigenvectors might be—or, equivalently, what are the eigenvectors corresponding to a two-dimensional rotation of basis in a plane? The way to find out is to write down the matrix and diagonalize it. The matrix is \[ U(\theta)=\begin{pmatrix} \cos \theta &\sin \theta\\ -\sin \theta &\cos \theta\end{pmatrix} \tag{2.2.67}\] Note that the determinant is equal to unity.
The eigenvalues are given by solving \[ \begin{vmatrix} \cos \theta -\lambda &\sin \theta\\ -\sin \theta &\cos \theta -\lambda\end{vmatrix}=0\; to\, give \; \lambda=e^{\pm i\theta} \tag{2.2.68}\]

The corresponding eigenvectors satisfy \[ \begin{pmatrix} \cos \theta &\sin \theta\\ -\sin \theta &\cos \theta\end{pmatrix}\dbinom{u_1^{\pm}}{u_2^{\pm}}=e^{\pm i\theta}\dbinom{u_1^{\pm}}{u_2^{\pm}} \tag{2.2.69}\]

The eigenvectors, normalized, are: \[ \dbinom{u_1^{\pm}}{u_2^{\pm}}=\frac{1}{\sqrt{2}}\dbinom{1}{\pm i} \tag{2.2.70}\]

Note that, in contrast to a Hermitian matrix, the eigenvalues of a unitary matrix do not have to be real. In fact, from \(U^{\dagger}U=1\), sandwiched between the bra and ket of an eigenvector, we see that any eigenvalue of a unitary matrix must have unit modulus—it’s a complex number on the unit circle. With hindsight, we should have realized that one eigenvalue of a two-dimensional rotation had to be \(e^{i\theta}\): the product of two two-dimensional rotations is given by adding the angles of rotation, and a rotation through \(\pi\) changes all signs, so has eigenvalue \(-1\). Note that the eigenvector itself is independent of the angle of rotation—the rotations all commute, so they must have common eigenvectors. Successive rotation operators applied to the plus eigenvector add their angles; applied to the minus eigenvector, all angles are subtracted.
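These results are easy to confirm numerically: for any angle, the eigenvalues of \(U(\theta)\) come out as \(e^{\pm i\theta}\) (unit modulus, as for any unitary matrix), and the eigenvector belonging to \(e^{+i\theta}\) is proportional to \((1, i)\). A NumPy sketch, with the angle chosen arbitrarily:

```python
import numpy as np

theta = 0.7                                      # an arbitrary rotation angle (radians)
U = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

eigenvalues, eigenvectors = np.linalg.eig(U)
moduli = np.abs(eigenvalues)                     # all 1: eigenvalues lie on the unit circle

k = int(np.argmax(eigenvalues.imag))             # index of the e^{+i theta} eigenvalue
ratio = eigenvectors[1, k] / eigenvectors[0, k]  # +i means the eigenvector ~ (1, i)
print(eigenvalues, moduli, ratio)
```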
Rotational Motion

We have now covered the three basic elements of mechanics: force, energy and momentum. In all of our discussion so far we have considered an object moving from one location to another. However, there are other types of motion than this. For example, an object can rotate: the object is clearly moving, but the center of the object stays at the same location. The motion we have discussed up to now is called translational motion, in which the center of mass of the object changes location. We now will discuss rotational motion, in which an object rotates around a fixed point. Of course objects can have both translational and rotational motion at the same time and often do. We separate them here just to make it easier to focus on the rotational part. All of the physics we have learned still applies in exactly the same way for rotational motion.

The major difference in rotational motion is that the Cartesian coordinate system we have been using, x, y, z, doesn’t make as much sense. Instead it makes more sense to use polar coordinates R, Θ. If an object is in fixed rotation then its distance to the axis of rotation, R, does not change. The only coordinate that changes with time is the angle, Θ. To describe rotational kinematics we would therefore want to use an angular velocity and angular acceleration defined by:

$\vec{\omega} = {d\vec{\theta}\over{dt}}$

$\vec{\alpha} = {d\vec{\omega}\over{dt}}$

For angular quantities we will want to use radians instead of degrees, so a full circle is 2π instead of 360º. The units of Θ will be radians or rad, for angular velocity, ω, they will be rad/s and for angular acceleration, α, it will be rad/s². If we consider one point on our rotating object then we can connect angular quantities to translational quantities by multiplying by R for that point. So the distance the point travels would be RΘ, the velocity of that point would be Rω and the tangential acceleration of that point would be Rα.
Note that the point already has a centripetal acceleration because it is undergoing circular motion. The tangential acceleration is the change in velocity orthogonal to the centripetal acceleration. The above equations for angular velocity and acceleration are identical to equations we had in translational kinematics with just changes in the variables. This means solutions to these equations must also be identical. So for constant angular acceleration we have:

$\omega = \omega_0 + \alpha t$

$\theta = \theta_0 + \omega_0 t + \frac{1}{2}\alpha t^2$

$\omega^2 = \omega_0^2 + 2\alpha (\theta - \theta_0)$

$\bar{\omega} = \frac{1}{2}(\omega_0 + \omega)$

Now that we have discussed angular kinematics let us turn to the angular analogue of force, which is called torque. Torque must be defined so that it is proportional to the angular acceleration. To do this we must define torque as:

$\vec{\tau} = \vec{R} \times \vec{F}$, with magnitude $\tau = RF\sin{\phi}$

where torque is the cross product between a radial vector from the axis of rotation and the force. Since we haven’t discussed cross products yet, for now we can just define torque as the magnitude of the distance to the axis of rotation times the magnitude of the force times the sine of the angle between them. Notice we have defined that angle as φ because we are using Θ as a coordinate.

Now that we have torque and angular acceleration we only need the angular equivalent of mass to get Newton’s second law. This is called the moment of inertia of an object and is given by

$I = \sum_i m_i R_i^2$ or $I = \int \rho(R)R^2 dR$

For all simple geometries this integral was solved long ago and you can just look up what common moments of inertia are. For example for a disk it is I=½MR². You can find more than you will need on Wikipedia's List of moments of inertia. With torque and moment of inertia defined we can now write Newton’s second law for rotating objects.

$\sum \vec{\tau} = I \vec{\alpha}$

Here is an example of torque from the movie The Dark Knight.
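Putting the pieces together numerically (all values below are invented for illustration): a constant torque on a uniform disk gives a constant angular acceleration via τ = Iα, and the constant-α kinematic equations above must then agree with one another:

```python
import math

# Invented example: a uniform disk spun up from rest by a constant torque.
M, R = 2.0, 0.5            # mass (kg) and radius (m) of the disk
tau = 4.0                  # constant torque (N·m)

I = 0.5 * M * R**2         # moment of inertia of a disk: I = (1/2) M R^2
alpha = tau / I            # Newton's second law for rotation: tau = I * alpha

t = 3.0                    # elapsed time (s)
omega0, theta0 = 0.0, 0.0  # starts from rest
omega = omega0 + alpha * t                        # omega = omega0 + alpha*t
theta = theta0 + omega0 * t + 0.5 * alpha * t**2  # theta = theta0 + omega0*t + (1/2)*alpha*t^2

# Cross-check with omega^2 = omega0^2 + 2*alpha*(theta - theta0):
check = omega0**2 + 2 * alpha * (theta - theta0)
print(alpha, omega, theta, math.isclose(omega**2, check))
```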
If an object is rotating then it will take work to stop that rotation. That means there must be rotational kinetic energy.

$KE = \frac{1}{2} I \omega^2$

which is just what you would expect, replacing the translational quantities by their rotational equivalents. Rotational kinetic energy is just another form of energy; when energy is conserved the rotational kinetic energy must come from or go to potential energy. Or rotational kinetic energy can be converted into translational kinetic energy. Often an object will have both translational and rotational motion, like the wheel of a bike. There is a special case of this called rolling without slipping. This occurs when the object's rolling causes it to move. So then the distance it goes is x = RΘ, it has velocity v=Rω and acceleration a=Rα. Note that this is different than when we had the same formulas above; here we are talking about the center of mass motion of the object. Before we were just talking about the motion of a point on a rotating object whose center is at rest. Here is an example of rotational kinetic energy and rolling without slipping from the movie the Princess

Finally we get to angular momentum. Angular momentum is related to the net torque just like momentum was related to net force. The letter L is commonly used for angular momentum. The angular momentum of a rotating object is given by

$\vec{L} = I\vec{\omega}.$

When we introduced translational momentum we saw that Newton’s second law could be written in terms of the derivative of momentum. The same thing now holds for angular momentum: $\sum \tau = dL/dt$. And thus if there is no net torque on an object the angular momentum is conserved. Note that there could be a net force on an object but no net torque, or a net torque but no net force. So angular momentum conservation doesn’t imply momentum conservation or vice versa. If we have an object whose center of mass is moving we can still discuss its angular momentum.
In this case it depends on what axis of rotation one is considering, and the angular momentum is given by

$\vec{L} = \vec{R} \times \vec{p}$, with magnitude $L = Rmv\sin{\phi}$
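As a concrete check of the cross-product form (the numbers are invented): a point mass moving perpendicular to its radius vector, so that φ = 90° and sin φ = 1:

```python
import numpy as np

m = 2.0                          # mass (kg)
R = np.array([3.0, 0.0, 0.0])    # position relative to the chosen axis (m)
v = np.array([0.0, 4.0, 0.0])    # velocity, perpendicular to R here (m/s)

p = m * v                        # linear momentum
L = np.cross(R, p)               # L = R x p; points along the rotation (z) axis

# Magnitude agrees with R m v sin(phi), with sin(phi) = 1 for this geometry.
magnitude = np.linalg.norm(R) * m * np.linalg.norm(v)
print(L, magnitude)
```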
Linear Programming: Definition, Methods and Problems

Linear Programming (LP) is a mathematical optimisation technique designed to maximise or minimise a linear objective function subject to a set of linear equality and inequality constraints. Introduced during the twentieth century, LP has become a fundamental tool in many fields, including operations research, economics, finance, and engineering.

At its core, LP is about making optimal decisions in situations where resources are limited. The term "linear" refers to the linearity of both the objective function and the constraints, meaning that the relationships between decision variables are proportional and additive. The objective function represents the quantity to be maximised or minimised, such as profit or cost, while the constraints define the bounds within which the decision variables must operate.

The graphical representation of an LP problem involves constructing the feasible region, the intersection of all the constraints, and the optimal solution is found at an extreme point of this region. As problems grow in complexity, however, more advanced algorithms such as the Simplex Method or interior-point methods are used to find solutions efficiently and accurately.

Linear programming has widespread applications, ranging from production planning and resource allocation to portfolio optimisation and supply chain management. Its flexibility and ability to address real-world challenges make it a cornerstone of optimisation and decision-making.

Definition of Linear Programming

Linear programming (LP) is a mathematical optimisation technique used in several disciplines, including operations research, management, economics, and engineering, to support decision-making. Its basic goal is to maximise or minimise a linear objective function while satisfying a set of linear constraints.
In this context, the terms "linear" and "additive" describe how the decision variables interact. A linear programming problem in its general form can be written as follows:

Maximize or Minimize: c₁x₁ + c₂x₂ + ... + cₙxₙ

Subject to:

a₁₁x₁ + a₁₂x₂ + ... + a₁ₙxₙ ≤ b₁
...
aₘ₁x₁ + aₘ₂x₂ + ... + aₘₙxₙ ≤ bₘ
x₁, x₂, ..., xₙ ≥ 0

Basic Components of Linear Programming

• Decision Variables (x):
Theory: Decision variables are the fundamental objects that stand for the quantities the decision-maker wants to determine. They serve as the foundation of the optimisation process and are the factors the decision-maker can influence.
Significance: By capturing the core of the decision problem, decision variables make it possible to build the mathematical relationships that model the decision-making process.

• Objective Function (Z):
Theory: The decision-maker's goal is quantified by a mathematical expression called the objective function. It specifies the criterion that is to be maximised or minimised.
Significance: The objective function directs the optimisation process towards an outcome consistent with the decision-maker's goals by offering a quantitative, measurable target.

• Constraints:
Theory: Constraints limit or restrict the values the decision variables may take. They play a crucial role in defining the feasible region and guaranteeing that the solution meets realistic, practical requirements.
Significance: Constraints model the limitations imposed by resources, the environment, or other factors. They narrow the range of potential solutions down to workable ones.

• Non-negativity Constraints:
Theory: Non-negativity constraints prevent decision variables from taking negative values. This is consistent with the practical observation that actions, resources, or quantities can never be negative.
Significance: By limiting variables to non-negative values, the model avoids unrealistic or unworkable solutions and stays grounded in the real world.
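To make the general form and its components concrete, here is a tiny LP written out as NumPy arrays (all numbers are invented for illustration), together with a feasibility check for a candidate decision:

```python
import numpy as np

# A tiny LP in the general form above (illustrative numbers):
#   maximize  z = 3*x1 + 5*x2
#   subject to      x1          <= 4
#                          2*x2 <= 12
#                   3*x1 + 2*x2 <= 18
#                   x1, x2 >= 0   (non-negativity)
c = np.array([3.0, 5.0])          # objective coefficients
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])        # constraint coefficients
b = np.array([4.0, 12.0, 18.0])   # right-hand sides

def is_feasible(x):
    """Check every constraint plus non-negativity for a candidate decision x."""
    return bool(np.all(A @ x <= b) and np.all(x >= 0))

x = np.array([2.0, 6.0])          # a candidate decision
print(is_feasible(x), c @ x)      # feasible, with objective value c . x
```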
Types of Linear Programming

• Classic Linear Programming: The goal of ordinary linear programming is to optimise a linear objective function while taking linear constraints into account. Use Cases: Suitable for resource allocation, production scheduling, and optimisation problems across a wide range of industries.

• Integer Linear Programming (ILP): A variation of linear programming in which the decision variables may only take integer values. Use Cases: Often used in discrete optimisation problems such as network design and project scheduling; helpful when decision variables must be whole numbers.

• Binary Linear Programming: A variant of integer linear programming in which the decision variables may only take binary values (0 or 1). Use Cases: Frequently used for binary choice problems, including binary optimisation challenges, network design, logistics, and yes/no or on/off scenarios.

• Mixed-Integer Linear Programming (MILP): A linear programming paradigm that combines both continuous and integer decision variables. Use Cases: Practical in situations requiring certain decision variables to take integer values while allowing others to remain continuous.

• Multi-objective Linear Programming: Involves simultaneously optimising several linear objective functions, each representing a distinct goal or set of requirements. Use Cases: Beneficial in situations where decision-makers must weigh trade-offs between competing goals.

• Dynamic Linear Programming: Takes changes over time into account, extending linear programming to dynamic and time-dependent contexts. Use Cases: Frequently seen in project management, inventory and production control, and time-sensitive resource allocation.
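Because each variable in a binary linear program can only be 0 or 1, very small instances can be solved by brute-force enumeration of all 2ⁿ assignments, which is a useful sanity check even though real solvers use smarter methods. A knapsack-style example with invented numbers:

```python
from itertools import product

# Invented binary LP (knapsack form):
#   maximize  sum(values[i] * x[i])
#   subject to  sum(weights[i] * x[i]) <= capacity,  each x[i] in {0, 1}
values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50

best_value, best_x = 0, None
for x in product([0, 1], repeat=len(values)):          # all 2^n binary vectors
    weight = sum(w * xi for w, xi in zip(weights, x))
    if weight <= capacity:                             # the single constraint
        value = sum(v * xi for v, xi in zip(values, x))
        if value > best_value:
            best_value, best_x = value, x

print(best_x, best_value)
```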
Applications of Linear Programming

• Production Scheduling: Linear programming is frequently used to optimise production processes, determining the best mix of products to manufacture in order to maximise profit or minimise costs while respecting resource limits.

• Supply Chain Management and Logistics: It helps optimise distribution networks, inventory management, and transportation routes in order to cut transportation costs, lower storage costs, and increase overall supply chain efficiency.

• Finance and Investment Portfolio Optimisation: LP is used to optimise investment portfolio returns while respecting restrictions such as asset allocation guidelines, budgetary limits, and risk tolerance.

• Marketing and Advertising Campaigns: It helps maximise reach, impact, or consumer engagement by distributing resources across many channels in order to optimise marketing and advertising expenditure.

• Resource Allocation in Agriculture: Farmers can use linear programming to optimise the allocation of resources like labour, land, and fertilisers in order to maximise agricultural output or profit.

• Project Scheduling: LP assists project managers in efficiently scheduling tasks and allocating resources, saving time and expense while meeting project deadlines and constraints.

Types of Linear Programming Problems

• Linear Maximisation Problem: The aim is to maximise a linear objective function while satisfying a set of linear constraints. Example: Finding the best production mix, subject to resource constraints, in order to maximise profit.

• Linear Minimisation Problem: The aim is to minimise a linear objective function while satisfying a set of linear constraints. Example: Cost minimisation within a transportation network with capacity restrictions.
• Problem in Standard Form Linear Programming: The aim is to represent a linear programming problem in standard form, where the decision variables are non-negative and all constraints are inequalities. Example: Solving a problem with mixed inequality constraints in standard form.
• Linear Programming Problem in Canonical Form: Like standard form, but with all constraints expressed as equations. Example: A problem in which equality constraints dominate.
• Problem of Feasibility: Determining whether a workable solution can be found within the specified parameters. Example: Determining whether the solution to a system of linear equations fulfils every condition.
• Problem of Unbounded Linear Programming: The objective function can be improved without limit because the feasible region is unbounded. An infinite profit potential resulting from surplus resources in a manufacturing problem is an example.

Methods used for solving Linear Programming

• Visual Approach: The graphical method is appropriate for situations with two decision variables. The feasible region is drawn, and the best answer is located at an intersection point of the constraints. Applicability: Restricted to problems with few variables and constraints.
• The Simplex Method: This extensively utilised approach addresses linear programming problems involving any number of variables. It repeatedly advances from one vertex of the feasible region to the next until an optimal solution is found. Applicability: Fit for moderately big problems with a reasonable number of constraints and variables.
• Dual Simplex Approach: A variation on the simplex approach that is very helpful in resolving unbounded or infeasible linear programming problems. Applicability: Good when the conventional simplex approach runs into difficulties.
• Interior Point Techniques: Instead of navigating the vertices, these approaches travel through the interior of the feasible region. For massive linear programming problems, they are effective.
Applicability: Fits well with problems involving many variables and constraints.
• Branch and Bound: An algorithmic method that progressively partitions the feasible region into subproblems and discards those that cannot contain the optimal solution. Application: Frequently employed in the resolution of mixed-integer linear programming (MILP) problems.
• Genetic Algorithms: Natural selection serves as the inspiration for these optimisation strategies. They work with a population of prospective solutions that improves over successive generations. Application: Especially helpful in solving intricate and non-linear optimisation problems.
• Gradient Descent: An iterative optimisation procedure that moves in the direction of steepest descent (or ascent), determined by the gradient of the objective function. Application: A good fit for convex problems, especially in optimisation and machine learning.
• Karmarkar's Algorithm: A polynomial-time interior-point method for linear programming. Application: Suitable for large-scale linear programming problems.
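As a rough illustration of why vertices matter (the intuition shared by the visual approach and the simplex method), the sketch below solves a small two-variable LP by enumerating the intersection points of the constraint boundaries and keeping the best feasible one. The problem data are invented for illustration, and this naive enumeration is not the simplex method itself — it merely demonstrates the underlying fact that an optimum, when one exists, is attained at a vertex of the feasible region:

```python
def solve_2var_lp(objective, constraints):
    """Maximise cx*x + cy*y subject to a*x + b*y <= r for each
    (a, b, r) in `constraints`, by checking every intersection of
    two constraint boundaries (candidate vertices)."""
    cx, cy = objective
    best = None
    n = len(constraints)
    for i in range(n):
        for j in range(i + 1, n):
            a1, b1, r1 = constraints[i]
            a2, b2, r2 = constraints[j]
            det = a1 * b2 - a2 * b1
            if det == 0:          # parallel boundaries: no unique vertex
                continue
            x = (r1 * b2 - r2 * b1) / det
            y = (a1 * r2 - a2 * r1) / det
            # keep the point only if it satisfies *all* constraints
            if all(a * x + b * y <= r + 1e-9 for a, b, r in constraints):
                value = cx * x + cy * y
                if best is None or value > best[0]:
                    best = (value, x, y)
    return best

# maximise 3x + 2y  s.t.  x + y <= 4,  x + 2y <= 6,  x >= 0,  y >= 0
# (x >= 0 is written as -x <= 0, and likewise for y)
cons = [(1, 1, 4), (1, 2, 6), (-1, 0, 0), (0, -1, 0)]
print(solve_2var_lp((3, 2), cons))  # optimum value 12 at vertex (4, 0)
```

The simplex method avoids this exhaustive pairwise check by walking from one vertex to an adjacent, improving vertex, and interior-point methods avoid the vertices altogether, which is why both scale far beyond two variables.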
Day Of Week Functions

This page describes a number of formulas and VBA functions you can use when working with days of the week. Many, if not most, workbooks work with dates in one fashion or another. This page describes 9 functions, implemented both as worksheet formulas and as VBA functions, that you can use when working with days of the week. In all functions, the day of the week is expressed as a number, where 1 = Sunday, 2 = Monday, ..., 7 = Saturday.

It should also be noted that most of the functions use a modulus operation, Excel's MOD worksheet function. The MOD worksheet function and the VBA Mod operator can produce different results, specifically when negative numbers are involved. In order to maintain logical continuity between the worksheet formulas presented here and their VBA function equivalents, the VBA code uses a function named WSMod that behaves the same as the MOD worksheet function. If you write your own VBA functions based on the code provided here, you will want to use the WSMod function rather than VBA's Mod operator.

You can download the XLS workbook with all the formulas and VBA code, or you can download the BAS module file containing the VBA code.

The following formulas and VBA functions are described on this page and are available in the download files.

• DaysOfWeekBetweenTwoDates This returns the number of Day Of Week days between two dates. For example, the number of Tuesdays between 15-Jan-2009 and 26-July-2010.
• DaysOfWeekInMonth This returns the number of a given Day Of Week in a given month and year. For example, the number of Tuesdays in April, 2009.
• DateOfPreviousDayOfWeek This returns the date of the first Day Of Week before a given date. For example, the date of the Tuesday before 15-June-2009.
• DateOfNextDayOfWeek This returns the date of the first Day Of Week following a given date. For example, the date of the first Tuesday after 15-June-2009.
• FirstDayOfWeekInMonth This returns the date of the first Day Of Week day in a given month and year. For example, the date of the first Friday in March, 2010.
• LastDayOfWeekInMonth This returns the date of the last Day Of Week day in a given month and year. For example, the date of the last Friday in May, 2009.
• NthDayOfWeekInMonth This returns the date of the Nth Day Of Week day in a given month and year. For example, the date of the third Friday in May, 2009.
• FirstDayOfWeekOfYear This returns the date of the first Day Of Week day of a given year. For example, the date of the first Friday in 2009.
• LastDayOfWeekOfYear This returns the date of the last Day Of Week day of a given year. For example, the date of the last Monday in 2009.

These functions, in both worksheet formula and VBA implementations, are described below. The WSMod function, which is used in place of VBA's Mod operator, is as follows:

Function WSMod(Number As Double, Divisor As Double) As Double
    ' WSMod
    ' The Excel worksheet function MOD and the VBA Mod operator
    ' work differently and can return different results under
    ' certain circumstances. For continuity between the worksheet
    ' formulas and the VBA code, we use this WSMod function, which
    ' produces the same result as the Excel MOD worksheet function,
    ' rather than the VBA Mod operator.
    WSMod = Number - Divisor * Int(Number / Divisor)
End Function

DaysOfWeekBetweenTwoDates

This returns the number of Day Of Week days between two dates. For example, the number of Tuesdays between 6-Jan-2009 and 31-Jan-2009 is 4. In VBA,

Public Function DaysOfWeekBetweenTwoDates(StartDate As Date, _
        EndDate As Date, DayOfWeek As VbDayOfWeek) As Variant
    ' DaysOfWeekBetweenTwoDates
    ' This function returns the number of DaysOfWeek between StartDate and
    ' EndDate. StartDate is the first date, EndDate is the last date, and
    ' DayOfWeek is a Long between 1 and 7 (1 = Sunday, 2 = Monday, ...
    ' 7 = Saturday). If StartDate is later than EndDate, the result is #NUM!.
    ' If DayOfWeek is out of range, the result is #VALUE.
    ' Note that this function uses WSMod to use Excel's worksheet function MOD
    ' rather than VBA's Mod operator.
    ' Worksheet function equivalent:
    ' =((EndDate-MOD(WEEKDAY(EndDate)-DayOfWeek,7)-StartDate-
    '    MOD(DayOfWeek-WEEKDAY(StartDate)+7,7))/7)+1
    If StartDate > EndDate Then
        DaysOfWeekBetweenTwoDates = CVErr(xlErrNum)
        Exit Function
    End If
    If (DayOfWeek < vbSunday) Or (DayOfWeek > vbSaturday) Then
        DaysOfWeekBetweenTwoDates = CVErr(xlErrValue)
        Exit Function
    End If
    If (StartDate < 0) Or (EndDate < 0) Then
        DaysOfWeekBetweenTwoDates = CVErr(xlErrValue)
        Exit Function
    End If
    DaysOfWeekBetweenTwoDates = _
        ((EndDate - WSMod(Weekday(EndDate) - DayOfWeek, 7) - StartDate - _
        WSMod(DayOfWeek - Weekday(StartDate) + 7, 7)) / 7) + 1
End Function

DaysOfWeekInMonth

This returns the number of Day Of Week days in a given month and year. For example, the number of Sundays in January, 2009, is 4. In VBA,

Public Function DaysOfWeekInMonth(MMonth As Long, YYear As Long, _
        DayOfWeek As VbDayOfWeek) As Variant
    ' DaysOfWeekInMonth
    ' This function returns the number of DaysOfWeek in the month MMonth in
    ' year YYear. If either the MMonth or YYear value is out of range, the
    ' result is #VALUE.
    ' Note that this function uses WSMod to use Excel's worksheet function MOD
    ' rather than VBA's Mod operator.
    ' Formula equivalent:
    ' =((DATE(YYear,MMonth+1,0)-MOD(WEEKDAY(DATE(YYear,MMonth+1,0))-DayOfWeek,7)-
    '    DATE(YYear,MMonth,1)-MOD(DayOfWeek-WEEKDAY(DATE(YYear,MMonth,1))+7,7))/7)+1
    If (MMonth < 1) Or (MMonth > 12) Then
        DaysOfWeekInMonth = CVErr(xlErrValue)
        Exit Function
    End If
    If (YYear < 1900) Or (YYear > 9999) Then
        DaysOfWeekInMonth = CVErr(xlErrValue)
        Exit Function
    End If
    If (DayOfWeek < vbSunday) Or (DayOfWeek > vbSaturday) Then
        DaysOfWeekInMonth = CVErr(xlErrValue)
        Exit Function
    End If
    DaysOfWeekInMonth = ((DateSerial(YYear, MMonth + 1, 0) - _
        WSMod(Weekday(DateSerial(YYear, MMonth + 1, 0)) - DayOfWeek, 7) - _
        DateSerial(YYear, MMonth, 1) - WSMod(DayOfWeek - _
        Weekday(DateSerial(YYear, MMonth, 1)) + 7, 7)) / 7) + 1
End Function

DateOfPreviousDayOfWeek

This function returns the date of the first Day Of Week day prior to a given date. For example, the Tuesday prior to 31-Jan-2009 is 27-Jan-2009. In VBA,

Public Function PreviousDayOfWeek(StartDate As Date, _
        DayOfWeek As VbDayOfWeek) As Variant
    ' PreviousDayOfWeek
    ' This function returns the date of the DayOfWeek prior to StartDate.
    ' Note that this function uses WSMod to use Excel's worksheet function MOD
    ' rather than VBA's Mod operator.
    ' Formula equivalent:
    ' =StartDate-MOD(WEEKDAY(StartDate)-DayOfWeek,7)
    If (DayOfWeek < vbSunday) Or (DayOfWeek > vbSaturday) Then
        PreviousDayOfWeek = CVErr(xlErrValue)
        Exit Function
    End If
    If (StartDate < 0) Then
        PreviousDayOfWeek = CVErr(xlErrValue)
        Exit Function
    End If
    PreviousDayOfWeek = StartDate - WSMod(Weekday(StartDate) - DayOfWeek, 7)
End Function

DateOfNextDayOfWeek

This returns the date of the first Day Of Week day following a given date. The Sunday following 15-Jan-2009 is 18-Jan-2009. In VBA,

Public Function NextDayOfWeek(StartDate As Date, _
        DayOfWeek As VbDayOfWeek) As Variant
    ' NextDayOfWeek
    ' This function returns the date of the DayOfWeek following StartDate.
    ' Note that this function uses WSMod to use Excel's worksheet function MOD
    ' rather than VBA's Mod operator.
    ' Formula equivalent:
    ' =StartDate+MOD(DayOfWeek-WEEKDAY(StartDate),7)
    If (DayOfWeek < vbSunday) Or (DayOfWeek > vbSaturday) Then
        NextDayOfWeek = CVErr(xlErrValue)
        Exit Function
    End If
    If (StartDate < 0) Then
        NextDayOfWeek = CVErr(xlErrValue)
        Exit Function
    End If
    NextDayOfWeek = StartDate + WSMod(DayOfWeek - Weekday(StartDate), 7)
End Function

FirstDayOfWeekInMonth

This returns the first Day Of Week in a given month and year. For example, the first Sunday in June, 2009, is 7-June-2009. In VBA,

Public Function FirstDayOfWeekInMonth(MMonth As Long, YYear As Long, _
        DayOfWeek As VbDayOfWeek) As Variant
    ' This returns the date of the first DayOfWeek in month MM in year YYYY.
    ' Formula equivalent:
    ' =DATE(YYear,MMonth,1)+(MOD(DayOfWeek-WEEKDAY(DATE(YYear,MMonth,1)),7))
    If (DayOfWeek < vbSunday) Or (DayOfWeek > vbSaturday) Then
        FirstDayOfWeekInMonth = CVErr(xlErrValue)
        Exit Function
    End If
    If (MMonth < 1) Or (MMonth > 12) Then
        FirstDayOfWeekInMonth = CVErr(xlErrValue)
        Exit Function
    End If
    FirstDayOfWeekInMonth = DateSerial(YYear, MMonth, 1) + _
        WSMod(DayOfWeek - Weekday(DateSerial(YYear, MMonth, 1)), 7)
End Function

LastDayOfWeekInMonth

This returns the last Day Of Week day in a given month and year. For example, the last Wednesday in November, 2009, is 25-November-2009. In VBA,

Public Function LastDayOfWeekInMonth(MMonth As Long, YYear As Long, _
        DayOfWeek As VbDayOfWeek) As Variant
    ' LastDayOfWeekInMonth
    ' This returns the date of the last DayOfWeek in month MM in year YYYY.
    ' Formula equivalent:
    ' =DATE(YYear,MMonth+1,0)-MOD(WEEKDAY(DATE(YYear,MMonth+1,0))-DayOfWeek,7)
    If (DayOfWeek < vbSunday) Or (DayOfWeek > vbSaturday) Then
        LastDayOfWeekInMonth = CVErr(xlErrValue)
        Exit Function
    End If
    LastDayOfWeekInMonth = DateSerial(YYear, MMonth + 1, 0) - _
        WSMod(Weekday(DateSerial(YYear, MMonth + 1, 0)) - DayOfWeek, 7)
End Function

NthDayOfWeekInMonth

This returns the Nth Day Of Week day in a given month and year. For example, the third Thursday of September, 2009, is 17-Sept-2009.
In VBA,

Public Function NthDayOfWeekInMonth(MMonth As Long, YYear As Long, _
        DayOfWeek As VbDayOfWeek, Nth As Long) As Variant
    ' NthDayOfWeekInMonth
    ' This returns the Nth Day Of Week in month MM in year YYYY.
    ' Formula equivalent:
    ' =DATE(YYear,MMonth,1)+(MOD(DayOfWeek-WEEKDAY(DATE(YYear,MMonth,1)),7))+(7*(Nth-1))
    If (MMonth < 1) Or (MMonth > 12) Then
        NthDayOfWeekInMonth = CVErr(xlErrValue)
        Exit Function
    End If
    If (YYear < 1900) Or (YYear > 9999) Then
        NthDayOfWeekInMonth = CVErr(xlErrValue)
        Exit Function
    End If
    If (DayOfWeek < vbSunday) Or (DayOfWeek > vbSaturday) Then
        NthDayOfWeekInMonth = CVErr(xlErrValue)
        Exit Function
    End If
    If Nth < 1 Then
        NthDayOfWeekInMonth = CVErr(xlErrValue)
        Exit Function
    End If
    NthDayOfWeekInMonth = DateSerial(YYear, MMonth, 1) + _
        (WSMod(DayOfWeek - Weekday(DateSerial(YYear, MMonth, 1)), 7)) + _
        (7 * (Nth - 1))
End Function

FirstDayOfWeekOfYear

This returns the date of the first Day Of Week day in a given year. For example, the first Tuesday in 2009 is 6-Jan-2009. In VBA,

Public Function FirstDayOfWeekOfYear(YYear As Long, DayOfWeek As VbDayOfWeek) As Variant
    ' FirstDayOfWeekOfYear
    ' This returns the date of the first DayOfWeek in the year YYear.
    ' Formula equivalent:
    ' =DATE(YYear,1,1)+MOD(DayOfWeek-WEEKDAY(DATE(YYear,1,1)),7)
    If (YYear < 1900) Or (YYear > 9999) Then
        FirstDayOfWeekOfYear = CVErr(xlErrValue)
        Exit Function
    End If
    If (DayOfWeek < vbSunday) Or (DayOfWeek > vbSaturday) Then
        FirstDayOfWeekOfYear = CVErr(xlErrValue)
        Exit Function
    End If
    FirstDayOfWeekOfYear = DateSerial(YYear, 1, 1) + _
        WSMod(DayOfWeek - Weekday(DateSerial(YYear, 1, 1)), 7)
End Function

LastDayOfWeekOfYear

This returns the date of the last Day Of Week day in a given year. For example, the last Wednesday in 2009 is 30-Dec-2009. In VBA,

Public Function LastDayOfWeekOfYear(YYear As Long, DayOfWeek As VbDayOfWeek) As Variant
    ' LastDayOfWeekOfYear
    ' This returns the last DayOfWeek of the year YYear.
    ' Formula equivalent:
    ' =DATE(YYear,12,31)-MOD(WEEKDAY(DATE(YYear,12,31))-DayOfWeek,7)
    If (YYear < 1900) Or (YYear > 9999) Then
        LastDayOfWeekOfYear = CVErr(xlErrValue)
        Exit Function
    End If
    If (DayOfWeek < vbSunday) Or (DayOfWeek > vbSaturday) Then
        LastDayOfWeekOfYear = CVErr(xlErrValue)
        Exit Function
    End If
    LastDayOfWeekOfYear = DateSerial(YYear, 12, 31) - _
        WSMod(Weekday(DateSerial(YYear, 12, 31)) - DayOfWeek, 7)
End Function

You can download the XLS workbook with all the formulas and VBA code, or you can download the BAS module file containing the VBA code.

This page last updated: 15-August-2009.
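The MOD-versus-Mod distinction that WSMod works around is not unique to Excel and VBA: it is the general difference between "floored" and "truncated" modulus, visible in many languages. As a short Python sketch of the same idea (Python's % operator is floored, like Excel's MOD; math.fmod truncates, like VBA's Mod):

```python
import math

def ws_mod(number, divisor):
    """Floored modulus, matching Excel's MOD and the VBA WSMod above:
    the result takes the sign of the divisor."""
    return number - divisor * math.floor(number / divisor)

# For positive operands every definition agrees...
print(10 % 7, math.fmod(10, 7), ws_mod(10, 7))   # all equal 3

# ...but with a negative operand they differ, which is exactly the
# case that matters when WEEKDAY(date) - DayOfWeek goes negative:
print(-3 % 7)            # 4   (floored, like Excel's MOD)
print(math.fmod(-3, 7))  # -3.0 (truncated, like VBA's Mod)
print(ws_mod(-3, 7))     # 4   (ws_mod reproduces Excel's behaviour)
```

This is why the formulas above can safely wrap negative weekday differences into the 0–6 range, and why substituting VBA's Mod for WSMod would silently break them.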
SciPost Submission Page

Bulk Renormalization Group Flows and Boundary States in Conformal Field Theories
by John Cardy

This Submission thread is now published.

Submission summary
Authors (as registered SciPost users): John Cardy

Submission information
Preprint Link: http://arxiv.org/abs/1706.01568v4 (pdf)
Date accepted: 2017-08-02
Date submitted: 2017-07-31 02:00
Submitted by: Cardy, John
Submitted to: SciPost Physics

Ontological classification
Academic field: Physics
Specialties:
• Condensed Matter Physics - Theory
• Mathematical Physics
Approach: Theoretical

Abstract:
We propose using smeared boundary states $e^{-\tau H}|\cal B\rangle$ as variational approximations to the ground state of a conformal field theory deformed by relevant bulk operators. This is motivated by recent studies of quantum quenches in CFTs and of the entanglement spectrum in massive theories. It gives a simple criterion for choosing which boundary state should correspond to which combination of bulk operators, and leads to a rudimentary phase diagram of the theory in the vicinity of the RG fixed point corresponding to the CFT, as well as rigorous upper bounds on the universal amplitude of the free energy. In the case of the 2d minimal models explicit formulae are available. As a side result we show that the matrix elements of bulk operators between smeared Ishibashi states are simply given by the fusion rules of the CFT.

List of changes

In response to Anonymous Report 1:
- Comparison with work of Fateev: sentence added at end that this would be interesting to do.
- On p. 3, discussion of correspondence between RG sinks and boundary states in Ising expanded.
- After eqs. (5,6) it is made clear when I am specializing to 1+1 dimensions.
- In sec. 2, T and Tbar are defined.
- After eq. (6) it is explained in the text what is the direction of quantization, rather than introducing a new figure.
- In eq. (26) the rescaling of E_a is stated explicitly.
- Typo after eq. (23) corrected.

In response to Paul Fendley:
1. It should work in principle for all RCFTs, but e.g. even the boundary states haven't been worked out in general. I have inserted wording that there is no obstacle in principle, as far as I know.
2. Are 1-point functions known for boundary states in integrable cases? I don't think so.
3. Thanks, this point is now emphasized and a reference to Huse added at this point.
4. Levin et al state in their first sentence: 'the Casimir force between parallel plates is attractive'. They then look at other geometries, not relevant to this work.
5. Thanks, I have now included discussion using Affleck's identification of boundary states, which I agree is more intuitive.
6. I have added to the caption, hopefully making what is a rather stretched comparison (which came from a comment from G Vidal) more comprehensible.
7. I added references to my 1989 paper and also a good review by Petkova and Zuber.
8. Thanks, yes, corrected.

Published as SciPost Phys. 3, 011 (2017)
Implementing a correct 33-year calendar reform

Subject: Implementing a correct 33-year calendar reform
Date: Wednesday, 09 Oct 1996 (finalised)
From: Simon Cassidy <simoncas@pacbell.net>
To: East Carolina University Calendar Discussion List

I will take up from where I left off, (answering Rick McCarty's query about how I think a 33-year calendar could work), by repeating the relevant significant paragraph at the end of my "Thirty-three year calendars" message:

>Let me be clear that I am not proposing that such a scheme (of moving the
>leap-day around the year by stepping it through some eight stations in the
>calendar scheme) is an appropriate scheme for reforming our current
>calendar (though neo-pagans may get very enthusiastic about the idea) nor
>should you suppose that this was the secret scheme, and perfect rival to
>the Gregorian calendar reform, which I have discovered was contemplated, and
>suppressed, in both Vatican and English circles ca. 1600 A.D..

That scheme (the only one that I feel is appropriate for our time), which has appeal for christians and non-christians alike, since it embodies many different traditions of human knowledge into a perfect christian solar (and solunar) penitential reform, is simplicity itself, and has been in effect since March 1st. 1980 on a probationary trial period of 36 years! All we HAVE to do (before the trial period granted to us ends, on February 28th. 2016) is to decide, whether we wish to continue having leap years in A.D. years with numbers divisible by 4, except century years not divisible by 400 (the simplest expression of Pope Gregory's rule); or whether instead TO CONTINUE WITH A CYCLIC REPETITION OF THE EIGHT NOMINAL LEAP-YEARS IN THE 33-YEAR TRADITIONAL LIFE OF JESUS!
The above statement, in terms of the life of Jesus, is an elegant and totally sufficient and determinative definition of the proposed new way to insert leap years, but is not prescriptive in a simple numerical fashion like the Gregorian rule (which requires no math. beyond the 4-times-table up to 100). Fortunately, this years-of-Jesus rule (henceforth referred to as the "Anni-Domini" rule) is exactly equivalent to a mathematical statement which requires no more arithmetic than the 4-times-table up to 32 (and the simple shopper and shopkeeper's, price-totalling and change-making skill, needed for a couple of double-digit additions or subtractions). In mathematical language we can say "February will have 29 days whenever the A.D. year-number, reduced modulo 33, is non-zero and divisible by 4."

But, in its simplest layman's formulation, here, finally, is the Anni-Domini Leap-Year Decision Procedure, which will (initially) triple the current accuracy with which the calendar follows the Vernal Equinox.**

Start with the Anno Domini year-number to be tested.

Get a new year-number by adding the number of centuries in it to the remaining number of years (beyond whole centuries). E.G. for 2012 A.D. add 20 to 12 to get 32 A.D.

If possible repeat the previous step until it can no longer reduce the year. E.G. for 1996 A.D. add 19 to 96 to get 115 A.D. then repeat and add 1 to 15 to get 16 A.D.

If the result is greater than 33 A.D. then subtract 33 or 66 to finish. E.G. for 2016 A.D. add 20 to 16 to get 36 A.D. then subtract 33 from 36 to get 3 A.D.

We now have a year-number guaranteed to be between 1 and 33 A.D. inclusive. If it is 4, 8, 12, 16, 20, 24, 28 or 32 A.D. then the tested year is a leap-year.

So, 2012 reduces to 32 and thus is a leap-year, and 1996 reduces to 115, then to 16 and thus is a leap-year, but 2016 reduces to 36, then to 3, so is not leap in the Anni-Domini system.
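For readers who prefer code to prose, here is a small Python sketch (mine, not from any historical source) of both formulations: the direct modulo-33 test and the layman's digit-reduction procedure. They agree because 100 ≡ 1 (mod 33), so adding the number of centuries to the remaining years preserves the year-number modulo 33:

```python
def is_anni_domini_leap(year):
    """Anni-Domini rule: February has 29 days whenever the A.D.
    year-number, reduced modulo 33, is non-zero and divisible by 4."""
    r = year % 33
    return r != 0 and r % 4 == 0

def reduce_year(year):
    """The layman's decision procedure: fold the centuries into the
    remaining years (valid because 100 = 3*33 + 1), then subtract 33s,
    yielding a number between 1 and 33 inclusive."""
    while year >= 100:
        year = year // 100 + year % 100  # add centuries to remaining years
    while year > 33:
        year -= 33
    return year

# The worked examples from the text:
print(reduce_year(2012), is_anni_domini_leap(2012))  # 32 True  (leap)
print(reduce_year(1996), is_anni_domini_leap(1996))  # 16 True  (leap)
print(reduce_year(2016), is_anni_domini_leap(2016))  # 3 False  (not leap)

# The two formulations agree for every year:
assert all(is_anni_domini_leap(y) ==
           (reduce_year(y) in {4, 8, 12, 16, 20, 24, 28, 32})
           for y in range(1, 10000))
```

Note how the reduction sends a year divisible by 33 to 33 itself rather than 0, which is why the "nominal leap-years" list stops at 32: year 33 of the cycle, like year 1, is a common year.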
One or two, double-digit additions, or, one addition and one subtraction, will suffice for all year-numbers until 3498 A.D. Then, two additions and one subtraction, or three additions, may be necessary (but three additions and one subtraction will not be necessary until 340,099 A.D.).

Note that this, current, millennial transition period, in which both the Gregorian and Anni-Domini leap-years are the same (from 1981-2015) is like the similar period around 1600 (1585-1619). This explains how John Dee's friends in parliament (Raleigh's group), were able to submit, in his absence, a concrete proposal for his calendar reform in 1585 and yet not reveal its leap-year rule. They did not have to specify any new leap-year rule until 1620! They just had to specify the number of days to apply as the special julian-discontinuity-correction. Dee's treatise written in 1582, specified an eleven-day correction. But it is followed by a note allowing a ten day correction for the year 1583. This is usually interpreted as Dee capitulating to the Gregorian correction so that e.g. commerce with France would not be affected by differing calendars. But actually, in the Anni-Domini calendar, 1583 should be a leap year not 1584. Thus a ten-day correction in 1583 combined with NO LEAP DAY in 1584 adds up to an 11 day correction for the years 1585-1619.

Similarly an eleven day correction if applied in 1582 (when Dee apparently wrote the main body of his treatise) would be synchronous with the Gregorian calendar (with its ten day correction in 1582) only for the period March 1583 to February 28th. 1584. The Anni-Domini cycle required a February 29th. 1583, which would pull Dee's correction back to a temporary synchrony with the Gregorian calendar, just until February 28th. 1584, when, as already stated, the Gregorian calendar's leap day would pull it back, to being one day behind the Anni-Domini calendar, again.
This is just to forewarn you that other historians will insist that there is proof, in Dee's own handwriting, that he proposed that England follow the Pope's calendar reform (an otherwise most unlikely assertion, given what I have already narrated to you, about the attitude of his circle to the Catholic League, Pope Gregory himself, and the jesuits). The secrecy around the whole project has confused all historians to date. And note that when I talk about February 1583 or February 1584 I am using the modern convention which begins the year with January; NOT the convention used by many Englishmen of the time which started the year in March. The confusion over which convention Dee used has not helped clarify matters either!

The secrecy was required by Dee's anti-Spanish colleagues (who needed time to first break King Phillip's monopoly on the calendrical longitude) and this kept postponing official implementation. Dee left for Germany, in September of 1583, to confer with William "the wise" and feel out the alchemical Emperor Rudolph. Dee was irenic at heart and probably hoped to be able to persuade the Holy Roman Emperor and thus all his Catholic subjects, to desert the corrupt Papal calendar for the perfect Nicene version which he had entrusted to Raleigh's "Atlantical" venture and parliament.

In light of the apparently implausible nature of my "conspiracy theory", it is instructive to examine the wonderful double meaning in a little piece of verse (perhaps inspired by Dee's spritely adviser Uriel) which appears to have been intended for insertion in that particular reform proposal of Dee's, which planned for a 1583 10-day correction by Queen Elizabeth (I will attempt authentication to confirm Dee's authorship.)

ELIZABETH our Empress bright,
Who in the yere of eighty three,
Thus made the truth to come to light,
And civile yere with heaven agree.
But eighty foure, the Pattern is
Of Christ's birth yere: and so for ay
Eche Bissext shall fall little mys,
To shew the sun of Christ birth day.

Three hundred yeres, shall not remove
The sun, one day, from this new match:
Nature, no more shall us reprove
Her golden tyme, so yll to watch.

The second of these three consecutive verses from the piece (as it appears in Robert Poole's web-page essay on Dee's reform, which can be found at http://ihr.sas.ac.uk/ihr/esh/jdee.html) is usually, as here by Poole, interpreted to simply repeat the statement, in the body of the treatise, that the birth year of Jesus is commonly held to be a leap-year ("Bissext") and that 1584 will be like it in this respect and the Sun will be in the same point of the zodiac (on Christmas day or New Years) for Jesus' anniversary.

However, from the point of view of the Anni-Domini calendar rule, a whole other meaning springs to light. The 33-year "Pattern" of the rule makes the year 1584 (0 modulo 33) equivalent to 1 B.C. (traditionally Christ's birth year) AND 33 A.D. (traditionally the year of the Passion), thus marking the beginning and end of a cyclic repetition of the life of Jesus. "And so for ay", thus, using this "Pattern", we can from now on, place each leap year ("Eche bissext") with almost no error ("shall fall little mys"), repeating the solar behaviour over and over, in a cycle begun at the birth of Christ.

Note also that, as Dee was certainly aware, it was not at all clear that the year now known as 1 B.C. really was a leap-year; that is, officially, and as actually observed, in the then Roman, Civil Calendar (when of course it was not known as the year 1 B.C. but as some year of Augustus or his consuls).
Dee's young friend, Thomas Harriot, who went to Roanoke and did the survey, with John White in 1585-6, of the area necessary to stake England's claim to the Calendrical meridian (it's there on their map!), apparently collected the various theories, about this ambiguity in the way leap years were assigned from 45 B.C. (1 Julius Caesar) to 8 A.D. (53 Julius Caesar). These theories can be found set out in columns, in BM MS ADD. 6788 (Thomas Harriot Mathematical Papers) at folio 499 (Recto), written after Dee and Clavius were both dead.

Harriot was apparently concerned about Christoph. Clavius' and Joseph Justus Scaliger's claim that neither 1 B.C. nor 4 A.D. were "actually" leap years (this theory is labelled as theirs in his column 7). Such claims would tend to undercut the elegance of the "years-of-Jesus" appeal of the Anni-Domini 33-year leap-day cycle. At columns 2 and 3 of his table, Harriot posits unnamed theories which probably represent the two alternatives that our spritely verse holds out.

The first alternative (Harriot's column 2; column 1 being the years numbered from the calendar's inauguration, by Julius Caesar in 45 B.C.) is the banal unthinking assumption that the Romans had a leap year every four years from Julius Caesar on. This meaning would hold if Dee's calendar reform were not carried out properly after he left for Germany, and 1584 were deemed a leap year in contradiction to the Anni-Domini prescription.

The second alternative (Harriot's column 3) however, has 1 B.C. a common year and has the four-year leap-days starting with 4 A.D., precisely in accord with the Anni-Domini rule, and with the hidden meaning of the verse. This second hidden meaning would only become apparent if parliament deemed 1584 a common non-leap year while Dee was to be away in Germany, or if posterity were to see Dee's "plat for the meanes".

The third verse is usually held to predict that the accuracy of the new calendar will match the sun's behaviour (every Christmas day?)
to an accuracy of one day in three hundred years. This is not much of a claim to accuracy since it implies a possible error of up to 0.0033 days between the real solar year (based on Christmas or New Year?) and the proposed Elizabethan reform's average year (despite the fact that this putative average calendar-year, or the procedure for maintaining an average year different from the julian one, is never actually stated anywhere in the extant works credited to Dee!!). If Dee really meant to follow the Gregorian year-length and leap-year rule, then he would have known that the accuracy (for any point in the tropical zodiac) was much better than this apparent 0.0033 days/year claim.

On the other hand, in terms of the accuracy of the Anni-Domini leap-year rule, this verse can be claiming that the Vernal Equinox will always occur on the same calendar day for at least three hundred years (at some calendrical prime meridian). This claim is much more in line with what we know of the accuracy of astronomy in Dee's time (both claimed and in retrospect). It is claiming an inaccuracy of no more than one ten thousandth of a day between the (unmentioned) average year of the proposed Elizabethan calendar and the true tropical year (true "Vernal Equinox" flavour, not the modern tropical Newcomb-style year, wrongly characterised as the mean year between V.E.s).

In retrospect this claim is uncannily accurate, in that, from 1580 until 1880, for calendar days beginning and ending at midnight local apparent time (the most unambiguous rule available in Dee's time), the Vernal equinox would indeed have occurred always on March 21st. at the longitude that Sir Walter tried to plant his city of Raleigh in "Old Virginia" (i.e. White's "50 miles into the main" from Roanoke Island, under the one meridian drawn, on his and Harriot's, map of the area, which also shows its relation to the Bahamas).
That is, of course, if the full calendar reform with the ten-day correction in 1583 and NO LEAP DAY in 1584 using the Anni-Domini leap-year rule had been properly implemented! I invite those of you mathematically competent, to do the detailed calculations and also check the history of time-conventions in effect at the dates involved. I personally am left amazed at the prophetic quality of this verse. If the calendrical meridian is calculated based on local mean time rather than apparent time (versions of mean time were in use by some astronomers in Dee's era) then "White's Ralegh longitude" seems good to this day, but if apparent local time is adhered to then "White's lost city of Ralegh" was saved from experiencing March 20th. Vernal Equinoxes (under the 11-day julian-corrected Anni-Domini calendar) in the nick of time by Railway Time-zone time of the 1880s becoming standardised to the mean-time under the meridian which lies exactly 5 hours behind Greenwich (75 degrees of longitude West of Greenwich meridian). "At noon or before on Sunday 18 November 1883 public clocks all over North America were altered to the 'new standard of time agreed upon, first by the railroads, for the sake of the uniformity of their schedules, but since generally adopted by the community through the action of various officials and corporate bodies as an obvious convenience in all social and business matters'" (Derek Howse in "Greenwich Time" 1980, quoting the New York Herald newspaper of that date?). November 1883 is 300 years to the month, from Dee's mysterious "secrett" deadline for implementation of his Calendar Reform, (see Poole http://ihr.sas.ac.uk/ihr/esh/jdee.html note#7 re this "deadline"). As we all probably know, Ralegh's American colonisation attempts failed, the last being John White's "Lost Colony" in 1587. Most everything but the Spanish Armada was forgotten about, in the following year of 1588. Dee's calendar has languished in secret ghostly committee ever since! 
Although some thought the ghost of Dee exorcised, when the British parliament implemented an eleven-day correction in 1752, we should note, that Queen Elizabeth herself, at the prompting of her new favorite Walter Raleigh, had reassured Dee, in April of 1583, that "Quod defertur non aufertur" (What is deferred, SHALL NOT BE ABORTED!). And now we can see why Dee insisted, on an eleven day correction in contrast to the Gregorian ten day correction (though his extant treatise disguises the reasons), before leaving, in 1583, to attempt to convert the Holy Roman Emperor to his calendar. With a ten-day correction, his Anni-Domini leap-year cycle would have kept the Vernal Equinox always on the 20th. of March (at longitudes through Ralegh's Virginia), but, with an eleven day correction, the Vernal equinox would have always fallen on 21st. March, the stated Nicene goal of the Gregorian reform! He certainly had a wonderfully seductive proposal for a Holy Roman Emperor who might feel miffed at the Pope, for not allowing the calendar reform pronouncement to emanate from the proper Imperial quarters. Dee could hand the Emperor a proposal even more Catholic and Orthodox than the Pope's! As it turned out, Dee fell foul of Vatican agents in Prague, and may never have felt secure enough to confide in any of Rudolph's experts. Though I doubt that there will be any calendar reform before the turn of the century, I propose that, if and when it is decided, to adopt the Anni-Domini leap-year rule, then we ought to ritually expunge Pope Gregory XIII from our calendar, by finally applying Dee's extra day of correction. In this way, the Nicene tradition will finally be fulfilled and the Equinox will occur on March 21st.** for as long as Gaia (Earth's rotation) allows. This symbolic changing of the guard could conveniently, and with all solemnity, be achieved with no requirement for any special activity on the actual date affected by the proclamation.
The ritual would consist of an act of government(s) naming February 29th 2000AD "Pope Gregory XIII Day" and then immediately thereupon, consigning his name and calendar to perdition by passing the act of discontinuity, which will leap one day ahead of the Gregorian calendar by declaring the year 2000 A.D. to be (or have been) a regular non-leap year! (as were 1800 and 1900 A.D.). In this fashion no planet need be offended ever again by having its day of the week associated with Pope Gregory XIII's leap-days (pardon my levity). After the Vatican recovers from high dudgeon, it will find that its Easter and metonic solunar tables will work readily and more accurately in the new Anni-Domini leap-year system, by the simple expedient of applying their lunar correction every 231 years (e.g. whenever Monday the 29th. February is followed, five years later, by Sunday 29th. February) instead of the current application of lunar corrections at some of the century years. ** The increasing solar accuracy will actually keep the Vernal Equinox within the same twenty-four calendar-hour period, every year for many centuries (probably for millennia) to come. This 24-hour period will be always on the same calendar date (for local, apparent or mean time) at longitudes such as Bermuda (Where Shakespeare's "The Tempest" envisages Dee with his spiritual adviser Uriel, as Prospero and the spirit Ariel, surrendering his magic staff into the earth and his book into the water). If, however, we consider calendar dates to be CIVILLY separated, by the stroke of midnight ZONE-TIME, then the Eastern Standard Time-zone (5 hours behind Greenwich) is still currently the place where the Vernal Equinox can stay on the same date, but a new time-zone (4 and 1/2 hours behind Greenwich) may succeed it, in some future century. (Math. based on Meeus '83 & '91). Yrs, Simon Cassidy, 1053 47th.St. Emeryville Ca.94608, .ph.510-547-0684.
Annex X.12 – INTERNAL, EXTERNAL, LOCAL AND GLOBAL STATES

In chapter 4, we saw the definitions of the state of an object or of a process, as well as the types of processual states that follow from this kind of definition. Based on these definitions, on the fact that each state type is assessed against a reference system, and on the fact that the object whose state is assessed may be a complex object, other classes of states can also be defined. When we discussed objects, we saw that their properties are determined first of all against an internal reference system, in which case we are dealing with internal properties. All the properties of an object are also determined against a reference outside the object, and for that reason they are called external properties. Let us recall definition 3.1.3, which was given for the notion of object: The object is a finite and invariant set of qualitative attributes (properties), with simultaneous, finite and invariant distributions, on the same finite and invariant support domain, which are determined against a common internal reference system. If the set of attributes of an object is made up of m properties, each element x[k] of the common support will be associated with m values of the distributed attributes, related to that element by means of m assignment relations. According to definition 4.2.1, all the existing (distributed) invariant attributes on an element x[k] of the common support make up the abstract state object at the value x[k] of that support. In chapter 2, we saw that the support element may be a singular value, in which case we are dealing with a primary distribution, or an elementary interval of values (with an internal reference x[k]), in which case we are dealing with a derived distribution (of a primary distribution).
In chapter 4, we saw that the state applicable to a singular value of the support is a state S[0] (the state of a primary distribution element), and the one related to a finite support interval is a state S[n], where n is the rank of the finite difference distributed on the elementary interval. Because any of the above-mentioned states, either S[0] or S[n], represents a set of properties belonging to a certain element of the support attribute, all of them will be considered local states (specific either to the support element x[k], or to the elementary interval with an internal reference at x[k]) of the object with the above-mentioned m properties. As we have previously pointed out, the values of the attributes of the local states can be evaluated against a reference system inside the object, in which case we are dealing with internal (local) states, or against an external reference system, in which case we are dealing with external states (local as well). We said earlier that the local states are states specific to a certain distribution element, of either a primary or a derived distribution belonging to an object. The m distributions belonging to an object Ob with a set of m qualitative properties have a finite number of elements (for the realizable distributions): the number of normal singular values, for the primary distributions, or the number of elementary intervals into which the support is divided, for the derived distributions. In chapter 3, we saw that the elements of a distribution are at the same time elementary objects; therefore, the object Ob is an object composed of a set of elementary objects, each with its own m properties provided by means of the assignment relations.
Since all the properties of an elementary object are specific (local) properties, they all have a common component, an aspect presented in chapter 3: the reference value against which these properties are evaluated, a value which belongs to the internal reference system of the object Ob. We also noticed in chapter 3 that this reference value is null (an absolute reference) for an isolated object, while for an object which maintains relations with other external objects, its value is established against an external reference common to all the objects which develop mutual relations, and it becomes a relative reference. In this case, the set of objects which maintain external relations makes up a complex object, the composition relations being created between the internal reference systems of each constitutive object; as a result of the existence of such relations, each internal reference is assigned a non-zero value. But this means that there is a set of dependence relations between the values of the internal references of the constitutive objects and the external reference, a set which makes up a new distribution representing the complex object. The total amount of properties assigned to the internal RS of a complex object against the external reference makes up an external state of this RS, and because that state is common to all the internal elements of the complex object, it is a global state of this object. In conclusion, an amount placed inside an invariant bounding surface may be characterized from two points of view - local and global. The local characterization is given by the elements of the spatial distribution of that amount inside the surface (mostly by their density), and the global one is given by the integral of this distribution (the total attribute amount distributed in the inner volume, that is, the attribute stockpile), or by the internal RS of the distribution.
As for the distributed processes, the local characterization is made by SEP (the element of Euler distribution), and the global one is given by the resultant of the vectors’ distribution (which is also the result of an integration). Copyright © 2006-2011 Aurel Rusu. All rights reserved.
Waves of intermediate length through an array of vertical cylinders

We report a semi-analytical theory of wave propagation through vegetated water. Our aim is to construct a mathematical model for waves propagating through a lattice-like array of vertical cylinders, where the macro-scale variation of waves is derived from the dynamics in the micro-scale cells. Assuming infinitesimal waves, a periodic lattice configuration, and strong contrast between the lattice spacing and the typical wavelength, the perturbation theory of homogenization (multiple scales) is used to derive the effective equations governing the macro-scale wave dynamics. The constitutive coefficients are computed from the solution of the micro-scale boundary-value problem for a finite number of unit cells. Eddy viscosity in a unit cell is determined by balancing the time-averaged rate of dissipation and the rate of work done by the wave force on the forest at a finite number of macro stations. While the spirit is similar to a RANS scheme, less computational effort is needed. Using one fitting parameter, the theory is used to simulate three existing experiments with encouraging results. Limitations of the present theory are also pointed out.
Binary Addition: A Fundamental Building Block of Digital Computing

Binary addition, the cornerstone of digital logic, is a simple yet powerful operation that forms the basis of modern computing. It involves the addition of two binary numbers, where each digit is either a 0 or a 1. While the concept may seem straightforward, its implications are far-reaching, influencing everything from the simplest calculators to the most complex supercomputers.

Understanding Binary Numbers

Before delving into binary addition, it's essential to grasp the concept of binary numbers. Unlike the decimal system we use in everyday life, which has 10 digits (0-9), the binary system uses only two digits: 0 and 1. Each digit in a binary number is called a bit.
• Bit: A binary digit, representing either 0 or 1.
• Byte: A group of 8 bits.
Binary numbers are interpreted differently than decimal numbers. For instance, the binary number 101 represents the decimal number 5. The rightmost bit (least significant bit) has a value of 2^0 (1), the next bit has a value of 2^1 (2), and the leftmost bit has a value of 2^2 (4). Therefore, 1*4 + 0*2 + 1*1 = 5.

The Process of Binary Addition

1. Add the least significant bits: Start with the rightmost bits of the two numbers.
2. Carry over: If the sum of the bits is greater than 1, carry over the 1 to the next position.
3. Repeat: Continue adding the bits from right to left, carrying over as needed.

  0011
+ 1101
------
 10000

In this example, the sum of the least significant bits is 1 + 1 = 2, which is greater than 1. So, we carry over the 1 to the next position. The sum of the second bits is 1 + 0 + 1 (carried over) = 2, again resulting in a carry over. The final result is 10000 in binary, which is equivalent to the decimal number 16.
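To make the carry mechanics concrete, here is a short sketch in Python (my own illustration, not from the article) that adds two binary strings right to left, carrying exactly as the steps above describe:

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary strings by scanning right to left with a carry bit."""
    n = max(len(a), len(b))
    a, b = a.zfill(n), b.zfill(n)      # pad the shorter operand with zeros
    result = []
    carry = 0
    for i in range(n - 1, -1, -1):     # least significant bit first
        total = int(a[i]) + int(b[i]) + carry
        result.append(str(total % 2))  # bit written at this position
        carry = total // 2             # 1 if the column summed to 2 or 3
    if carry:
        result.append('1')             # a final carry extends the result
    return ''.join(reversed(result))

print(add_binary('1101', '0011'))  # '10000' (13 + 3 = 16)
```

Every column follows the carry-over rule: write the sum modulo 2, carry the rest to the next column.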
Applications of Binary Addition

Binary addition is a fundamental operation in digital circuits and has numerous applications:
• Arithmetic Logic Units (ALUs): ALUs are the core components of CPUs that perform various arithmetic and logical operations, including addition.
• Digital Counters: Counters are used to keep track of events or quantities. They often employ binary addition to increment their values.
• Digital Logic Gates: Logic gates, the building blocks of digital circuits, are combined to implement binary addition (for example, XOR and AND gates form a half-adder).
• Error Detection and Correction: Binary addition is used in error detection and correction codes to identify and correct errors in data transmission.
• Cryptography: Encryption algorithms often rely on binary addition for various operations, such as modular arithmetic and bitwise operations.

Beyond Basic Addition: Advanced Concepts

While basic binary addition is essential, digital circuits often require more complex operations. Some of these include:
• Subtraction: Subtraction can be performed using a technique called two's complement, which involves negating a number and adding it to another.
• Multiplication: Multiplication can be implemented using repeated addition or specialized algorithms like the Booth algorithm.
• Division: Division is a more complex operation that involves repeated subtraction and shifting.

Binary addition, a seemingly simple concept, plays a crucial role in the functioning of modern computers. It forms the foundation for various digital operations, from basic arithmetic to complex algorithms. By understanding binary addition, we gain a deeper appreciation for the underlying principles of digital computing and the intricate mechanisms that power our technological world.

What is Binary Addition?

Binary addition is the process of adding two binary numbers together. It's a fundamental operation in digital electronics and computer science.
Just like regular addition, it involves carrying over digits (bits) to the next position when the sum exceeds the base (2 in the case of binary).

How Does Binary Addition Work?

Binary addition follows the same rules as decimal addition:
1. Start from the rightmost digit (least significant bit).
2. Add the corresponding digits from both numbers.
3. If the sum is less than 2, write the sum in the result.
4. If the sum is 2 or greater, write 0 (or 1, if the sum was 3) in the result and carry over 1 to the next position.
5. Repeat steps 2-4 for each digit, moving leftward.

What is the Carry Over Rule in Binary Addition?

The carry over rule in binary addition states that if the sum of two bits is 2 or greater, a 1 is carried over to the next position. This is similar to how we carry over in decimal addition when the sum exceeds 9.

How to Perform Binary Addition with Examples?

Here are some examples of binary addition:
• 101 + 110 = 1011
• 111 + 101 = 1100
• 1001 + 11 = 1100

What is the Truth Table for Binary Addition?

A truth table is a table that shows all possible input combinations and their corresponding outputs for a logical operation. Here's the truth table for binary addition of two bits:

Input A  Input B  Sum  Carry
   0        0      0     0
   0        1      1     0
   1        0      1     0
   1        1      0     1

How is Binary Addition Used in Computers?

Binary addition is a fundamental operation in digital circuits and computers. It's used in various components, including:
• Arithmetic Logic Units (ALUs): ALUs perform arithmetic operations like addition, subtraction, multiplication, and division.
• Adders: Dedicated circuits designed to perform binary addition efficiently.
• Registers: Storage units that hold binary data for processing.
• Control Units: Circuits that coordinate the execution of instructions, often using binary addition to calculate addresses and timing.

What are the Different Types of Adders?

There are several types of adders used in digital circuits, each with its own characteristics:
• Half-adder: Adds two bits and produces a sum and a carry.
• Full-adder: Adds three bits (two inputs and a carry-in) and produces a sum and a carry-out.
• Ripple-carry adder: A chain of full-adders connected in series, where the carry-out of one adder is the carry-in to the next.
• Carry-lookahead adder: A faster type of adder that uses logic to predict carries in advance, reducing propagation delay.

Can You Perform Binary Subtraction Using Addition?

Yes, binary subtraction can be performed using addition by using the two's complement method. In this method, the subtrahend (the number being subtracted) is converted to its two's complement and then added to the minuend (the number being subtracted from). The result is the difference between the two numbers.

What is the Significance of Binary Addition in Computer Science?

Binary addition is a crucial concept in computer science due to its fundamental role in digital circuits and computer operations. It's used in various applications, including:
• Number representation and manipulation: Binary numbers are the foundation of computer arithmetic and data storage.
• Logic circuits: Binary addition is implemented with logical operations like AND, OR, and XOR.
• Control flow: Binary addition is used to calculate addresses and timing for instruction execution.
• Error detection and correction: Binary addition is used in error detection and correction codes to ensure data integrity.
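The adder types and the two's-complement trick can both be sketched in a few lines of Python (a software illustration of the hardware ideas above, not a circuit description; bit lists are least-significant-bit first):

```python
def half_adder(a, b):
    """Two input bits -> (sum, carry)."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Three input bits -> (sum, carry_out), built from two half-adders."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def ripple_carry_add(a_bits, b_bits):
    """Add two equal-length bit lists (LSB first), chaining full adders."""
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)        # the final carry-out becomes the top bit
    return out

def twos_complement_sub(a_bits, b_bits):
    """a - b via two's complement: invert b, add 1, add to a, drop carry-out."""
    inverted = [1 - bit for bit in b_bits]
    one = [1] + [0] * (len(b_bits) - 1)
    neg_b = ripple_carry_add(inverted, one)[:len(b_bits)]  # discard overflow
    return ripple_carry_add(a_bits, neg_b)[:len(a_bits)]   # discard carry-out

# 1101 (13) + 0011 (3), least significant bit first:
print(ripple_carry_add([1, 0, 1, 1], [1, 1, 0, 0]))   # [0, 0, 0, 0, 1] -> 10000
# 13 - 3 = 10 -> 1010, least significant bit first:
print(twos_complement_sub([1, 0, 1, 1], [1, 1, 0, 0]))  # [0, 1, 0, 1]
```

A ripple-carry adder is literally this loop: each full adder waits for the previous carry, which is the propagation delay that carry-lookahead designs avoid.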
Robert Myers

Robert Myers is one of the leading theoretical physicists working in the area of quantum fields and strings. He received his Ph.D. at Princeton University in 1986, after which he was a postdoctoral researcher at what became the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara. He moved to McGill University in 1989, where he was a Professor of Physics until moving to Perimeter Institute and the University of Waterloo in the summer of 2001. Professor Myers was awarded the Herzberg Medal in 1999 by the Canadian Association of Physicists for seminal contributions to our understanding of black hole microphysics and D-branes. He is also the 2005 winner of Canada's top prize in theoretical and mathematical physics awarded by the Canadian Association of Physicists and the Centre de Recherches Mathématiques. More recently, he was awarded the 2012 Vogt Medal by the Canadian Association of Physicists and TRIUMF for outstanding theoretical contributions to subatomic physics. In 2006, he was elected a Fellow of the Royal Society of Canada. He is one of the few people to have won the first-place award in the Gravity Research Foundation Essay Contest more than once (winning in 1995 and 1997). Past winners of this contest, which was established for the purpose of stimulating thought and encouraging work on gravitation, include Stephen Hawking and Roger Penrose. Professor Myers was named as one of only three Canadian physicists on the list of the World's Most Influential Scientific Minds 2014, the Thomson Reuters list of top 1% of researchers who wrote most cited papers in their field over the period 2002 to 2012. In fact, he appeared again as the only Canadian physicist in the next edition of this list, the World's Most Influential Scientific Minds 2015, covering the period 2003 to 2013.
Professor Myers is also an associate fellow of the Cosmology and Gravity Program of the Canadian Institute For Advanced Research, a uniquely Canadian enterprise devoted to networking top-flight researchers from across the country. From 2001 to 2005, he was a founding member on the scientific advisory board of the Banff International Research Station, a facility devoted to hosting workshops and meetings in the mathematical sciences and related areas. Professor Myers has also served on the editorial boards of the following research journals: Annals of Physics (January 2002 - July 2012) and Journal of High Energy Physics (May 2007 - present). Though his current activities are centered at Perimeter Institute, Professor Myers remains active in both teaching and supervising graduate students with his cross-appointment as an Adjunct Professor in the Physics Department at the University of Waterloo.
Twin chute-swapping Sudoku

A pair of Sudokus with lots in common. In fact they are the same problem but rearranged. Can you find how they relate to solve them both? By Henry Kwok

Twin A

Twin B

Rules of Twin Chute-Swapping Sudokus

This Sudoku consists of a pair of linked standard Sudoku puzzles, each with some starting digits. As usual, the object of this Sudoku is to fill in the whole of each 9x9 grid with digits 1 through 9 so that each row, each column and each block contain all the digits 1 through 9. Twin B is related to twin A in the following way: given that twin A is the original puzzle, twin B is obtained by swapping a horizontal/vertical chute or band of blocks with another horizontal/vertical chute or band of blocks. By such a transformation, twin A and twin B are essentially the same or equivalent Sudoku puzzle. For example, fig 2 is created from fig 1 by shifting the first chute of 3 blocks sideways to the right and the second chute of 3 blocks sideways to the left. Similarly, an equivalent puzzle (fig 3) can be created from fig 1 by shifting the first chute of 3 blocks downwards and the second chute of 3 blocks upwards. The three puzzles are equivalent to one another. Equivalent puzzles can also be created from fig 2 and fig 3 by similar transformations.
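The transformation described above is easy to demonstrate in code. The sketch below (Python, my own illustration, not part of the puzzle page) swaps two horizontal chutes (bands of three rows) or two vertical chutes (stacks of three columns) and checks that a valid filled grid remains valid, which is why the twins are equivalent puzzles:

```python
def swap_row_chutes(grid, i, j):
    """Swap horizontal chutes i and j (each a band of 3 rows) of a 9x9 grid."""
    g = [row[:] for row in grid]
    g[3*i:3*i+3], g[3*j:3*j+3] = g[3*j:3*j+3], g[3*i:3*i+3]
    return g

def swap_col_chutes(grid, i, j):
    """Swap vertical chutes i and j (each a stack of 3 columns)."""
    transposed = [list(col) for col in zip(*grid)]
    return [list(row) for row in zip(*swap_row_chutes(transposed, i, j))]

def is_valid_solution(grid):
    """Check that rows, columns, and 3x3 blocks each contain 1-9."""
    full = set(range(1, 10))
    rows_ok = all(set(row) == full for row in grid)
    cols_ok = all(set(col) == full for col in zip(*grid))
    blocks_ok = all(
        {grid[r + dr][c + dc] for dr in range(3) for dc in range(3)} == full
        for r in (0, 3, 6) for c in (0, 3, 6))
    return rows_ok and cols_ok and blocks_ok

# A valid filled grid built by the standard shift pattern:
base = [[(3 * (r % 3) + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
swapped = swap_row_chutes(base, 0, 1)
print(is_valid_solution(base), is_valid_solution(swapped))  # True True
```

Swapping whole chutes moves complete rows (or columns) and complete blocks around without changing their contents, so every Sudoku constraint survives the rearrangement.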
Help for Counting Connected Yellow Patch Clusters in a Termite Model

Hello everyone, I am working on a termite model in NetLogo and I am having difficulties correctly counting the connected clusters of yellow patches. What I want to achieve is to detect how many clusters (groups of connected yellow patches) there are in the world and plot that number as the simulation progresses. So far, my code works, but what it is counting is simply the total number of yellow patches, not the connected clusters. After doing some research, I have tried to use a flood-fill function-based approach to identify the clusters, but I still can't get it to work as it should. Here is the code I have developed so far:

turtles-own [next-task steps]
patches-own [visited?] ; To keep track of patches that have already been visited

to setup
  set-default-shape turtles "bug"
  ; Initialize the graph for the number of clusters
  set-current-plot "Clusters"
  set-plot-x-range 0 100
  set-plot-y-range 0 50
  create-temporary-plot-pen "Clusters"
  ask patches [
    set visited? false ; Initialize all patches as not visited
    if random-float 100 < density [ set pcolor yellow ]
  ]
  create-turtles number [
    set color white
    setxy random-xcor random-ycor
    set steps 0
    set next-task "search-for-chip"
  ]
end

to go
  ask turtles [
    ifelse steps > 0
    [ set steps steps - 1 ]
    [
      if next-task = "search-for-chip" [ search-for-chip ]
      if next-task = "find-new-pile" [ find-new-pile ]
      if next-task = "put-down-chip" [ put-down-chip ]
      if next-task = "get-away" [ get-away ]
    ]
  ]
  detect-clusters ; Detect and plot clusters
end

to wiggle
  forward 1
  right random 20
  left random 20
end

to search-for-chip
  if pcolor = yellow [
    set pcolor black
    set color orange
    set steps 20
    set next-task "find-new-pile"
  ]
end

to find-new-pile
  if pcolor = yellow [ set next-task "put-down-chip" ]
end

to put-down-chip
  if pcolor = black [
    set pcolor yellow
    set color white
    set steps 20
    set next-task "get-away"
  ]
end

to get-away
  if pcolor = black [ set next-task "search-for-chip" ]
end

; Function to detect and count clusters of yellow patches
to detect-clusters
  ask patches [ set visited? false ] ; Reset the visited status for all patches
  let num-clusters 0 ; Variable to store the number of clusters
  ; Search for unvisited yellow patch clusters
  ask patches with [pcolor = yellow and not visited?] [
    ; If we find an unvisited yellow patch, flood-fill (visit) the entire cluster
    flood-fill self
    set num-clusters num-clusters + 1 ; Increment the number of clusters only once per cluster
  ]
  ; Plot the number of detected clusters
  set-current-plot-pen "Clusters"
  plot num-clusters
end

; Recursive search to flood-fill the connected cluster of yellow patches
to flood-fill [p] ; 'p' is the current patch
  ask p [
    set visited? true ; Mark this patch as visited
    ask neighbors4 with [pcolor = yellow and not visited?] [
      flood-fill self ; Recursively call to flood-fill the rest of the cluster
    ]
  ]
end

What I want: To detect how many clusters of connected yellow patches there are and plot that number in real time.
What I get: Instead of counting the clusters, the model is counting the total number of yellow patches.
Is there any mistake in how I am implementing the flood-fill algorithm to detect clusters? Any suggestions or improvements would be greatly appreciated.

Maybe try this:

  ; Search for unvisited yellow patch clusters
  ask patches with [pcolor = yellow] [
    if not visited? [
      ; If we find an unvisited yellow patch, flood-fill (visit) the entire cluster
      flood-fill self
      set num-clusters num-clusters + 1 ; Increment the number of clusters only once per cluster
    ]
  ]
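For readers outside NetLogo, the same cluster-counting idea can be sketched in Python (my own analogue, not the forum's code): do an iterative flood fill over a grid, and, as in the fix above, re-check the visited flag at the moment each cell is processed so that every cluster is counted exactly once:

```python
def count_clusters(grid):
    """Count 4-connected clusters of 1s in a 2D grid (list of lists)."""
    rows, cols = len(grid), len(grid[0])
    visited = [[False] * cols for _ in range(rows)]
    clusters = 0
    for r in range(rows):
        for c in range(cols):
            # Re-check visited at visit time -- the analogue of the fix above.
            if grid[r][c] == 1 and not visited[r][c]:
                clusters += 1
                # Iterative flood fill (a stack avoids recursion limits).
                stack = [(r, c)]
                visited[r][c] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and not visited[ny][nx]):
                            visited[ny][nx] = True
                            stack.append((ny, nx))
    return clusters

grid = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
]
print(count_clusters(grid))  # 3
```

One difference from NetLogo worth noting: here the outer loop's membership test is evaluated per cell as the loop runs, whereas NetLogo's `ask patches with [...]` builds the agentset once up front, which is exactly why the original model over-counted.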
Probably Overthinking It

Here's another installment in Data Q&A: Answering the real questions with Python. Previous installments are available from the Data Q&A landing page.

Sample Size Selection

Here's a question from the Reddit statistics forum.

Hi Redditors, I am a civil engineer trying to solve a statistical problem for a current project I have. I have a pavement parking lot 125,000 sf in size. I performed nondestructive testing to render an opinion about the areas experiencing internal delamination not observable from the surface. Based on preliminary testing, it was determined that 9% of the area is bad, and 11% of the total area I am unsure about (nonconclusive results if bad or good), and 80% of the area is good. I need to verify all areas using destructive testing, I will take out slabs 2 sf in size. my question is how many samples do I need to take from each area to confirm the results with 95% confidence interval?

There are elements of this question that are not clear, and OP did not respond to follow-up questions. But the question is generally about sample size selection, so let's talk about that.

If the parking lot is 125,000 sf and each sample is 2 sf, we can imagine dividing the total area into 62,500 test patches. Of those, some unknown proportion are good and the rest are bad. In reality, there is probably some spatial correlation — if a patch is bad, the nearby patches are more likely to be bad. But if we choose a sample of patches entirely at random, we can assume that they are independent. In that case, we can estimate the proportion of patches that are good and quantify the precision of that estimate by computing a confidence interval. Then we can choose a sample size that meets some requirement. For example, we might want the 95% confidence interval to be narrower than a given threshold, or we might want to bound the probability that the proportion falls below some threshold.

But let's start by estimating proportions and computing confidence intervals. I'll download a utilities module with some of my frequently-used functions, and then import the usual libraries.
But let's start by estimating proportions and computing confidence intervals. I'll download a utilities module with some of my frequently-used functions, and then import the usual libraries. In [1]: from os.path import basename, exists def download(url): filename = basename(url) if not exists(filename): from urllib.request import urlretrieve local, _ = urlretrieve(url, filename) print("Downloaded " + str(local)) return filename import numpy as np import matplotlib.pyplot as plt import pandas as pd import seaborn as sns from utils import decorate In [2]: # install the empiricaldist library, if necessary try: import empiricaldist except ImportError: !pip install empiricaldist The beta-binomial model¶ Based on preliminary testing, it is likely that the proportion of good patches is between 80% and 90%. We can take advantage of that information by using a beta distribution as a prior and updating it with the data. Here's a prior distribution that seems like a reasonable choice, given the background information. In [3]: from scipy.stats import beta as beta_dist prior = beta_dist(8, 2) The prior mean is at the low end of the likely range, so the results will be a little conservative. Here's what the prior looks like. In [4]: def plot_dist(dist, **options): qs = np.linspace(0, 1, 101) ps = dist.pdf(qs) ps /= ps.sum() plt.plot(qs, ps, **options) In [5]: plot_dist(prior, color='gray', label='prior') decorate(xlabel='Proportion good') This prior leaves open the possibility of values below 80% and greater than 90%, but it assigns them lower probabilities. Now let's generate a hypothetical dataset to see what the update looks like. Suppose the actual percentage of good patches is 90%, and we sample n=10 of them. In [6]: def generate_data(n, p): yes = int(round(n * p)) no = n - yes return yes, no And suppose that, in line with expectations, 9 out of 10 tests are good. In [7]: yes, no = generate_data(n=10, p=0.9) yes, no Under the beta-binomial model, computing the posterior is easy.
In [8]: def update(dist, yes, no): a, b = dist.args return beta_dist(a + yes, b + no) Here's how we run the update. In [9]: posterior10 = update(prior, yes, no) The posterior mean is 85%, which is halfway between the prior mean and the proportion observed in the data. Here's what the posterior distribution looks like, compared to the prior. In [10]: plot_dist(prior, color='gray', label='prior') plot_dist(posterior10, label='posterior10') decorate(xlabel='Proportion good') Given the posterior distribution, we can use ppf, which computes the inverse CDF, to compute a confidence interval. In [11]: def confidence_interval(dist, percent=95): low = (100 - percent) / 200 high = 1 - low ci = dist.ppf([low, high]) return ci Here's the result for this example. In [12]: confidence_interval(posterior10) array([0.66862334, 0.96617375]) With a sample size of only 10, the confidence interval is still quite wide — that is, the estimate of the proportion is not precise. In [13]: yes, no = generate_data(n=100, p=0.9) posterior100 = update(prior, yes, no) With a larger sample size, the posterior mean is closer to the proportion observed in the data. And the posterior distribution is narrower, which indicates greater precision. In [14]: plot_dist(prior, color='gray', label='prior') plot_dist(posterior10, label='posterior10') plot_dist(posterior100, label='posterior100') decorate(xlabel='Proportion good') The confidence interval is much smaller. In [15]: confidence_interval(posterior100) array([0.82660267, 0.94180387]) If we need more precision than that, we can increase the sample size more. If we don't need that much precision, we can decrease it. With some math, we could compute the sample size algorithmically, but a simple alternative is to run this analysis with different sample sizes until we get the results we need.
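That trial-and-error search is easy to automate. The sketch below is my own addition, not code from the post: it uses a normal approximation to the Beta posterior (mean a/(a+b), variance ab/((a+b)²(a+b+1))) instead of ppf, and assumes the data keep arriving at 90% good, to find the smallest sample size whose approximate 95% interval is narrower than a chosen width:

```python
import math

def beta_ci_width(a, b, z=1.96):
    """Approximate width of the central 95% interval of a Beta(a, b)
    posterior, using a normal approximation to the Beta distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return 2 * z * math.sqrt(var)

def smallest_n(prior_a=8, prior_b=2, p=0.9, max_width=0.1):
    """Smallest n whose posterior interval width falls below max_width,
    assuming a fraction p of the n destructive tests come back good."""
    for n in range(1, 100_000):
        yes = round(n * p)
        no = n - yes
        if beta_ci_width(prior_a + yes, prior_b + no) < max_width:
            return n
    return None
```

With these (assumed) settings the search lands in the low hundreds, consistent with the exact result above, where n=100 gives an interval roughly 0.12 wide.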
But what about that prior?¶ Some people don't like using Bayesian methods because they think it is more objective to ignore perfectly good background information, even in cases like this, where it comes from preliminary testing that is clearly applicable. To satisfy them, we can run the analysis again with a uniform prior, which is not actually more objective, but it seems to make people happy. In [16]: uniform_prior = beta_dist(1, 1) The mean of the uniform prior is 50%, so it is more pessimistic. Here's the update with n=10. In [17]: yes, no = generate_data(n=10, p=0.9) uniform_posterior10 = update(uniform_prior, yes, no) Now let's compare the posterior distributions with the informative prior and the uniform prior. In [18]: plot_dist(uniform_prior, color='gray', label='uniform prior') plot_dist(posterior10, color='C1', label='posterior10') plot_dist(uniform_posterior10, color='C4', label='uniform posterior10') decorate(xlabel='Proportion good') With the informative prior, the posterior distribution is a little narrower — an estimate that uses background information is more precise. Let's do the same thing with n=100. In [19]: uniform_prior = beta_dist(1, 1) yes, no = generate_data(n=100, p=0.9) uniform_posterior100 = update(uniform_prior, yes, no) In [20]: plot_dist(uniform_prior, color='gray', label='uniform prior') plot_dist(posterior100, color='C1', label='posterior100') plot_dist(uniform_posterior100, color='C4', label='uniform posterior100') decorate(xlabel='Proportion good') With a larger sample size, the choice of the prior has less effect — the posterior distributions are almost the same. Sample size analysis is a good thing to do when you are designing experiments, because it requires you to • Make a model of the data-generating process, • Generate hypothetical data, and • Specify ahead of time what analysis you plan to do. It also gives you a preview of what the results might look like, so you can think about the requirements.
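The claim that the prior washes out can be checked with nothing more than the conjugate-update arithmetic. This comparison is my own sketch (using the same hypothetical 90%-good data as above), showing that the gap between the posterior means under the Beta(8, 2) and uniform priors shrinks as n grows:

```python
def posterior_mean(prior_a, prior_b, yes, no):
    """Mean of the Beta posterior after a conjugate update."""
    return (prior_a + yes) / (prior_a + prior_b + yes + no)

def prior_gap(n, p=0.9):
    """Gap between posterior means under the informative Beta(8, 2)
    prior and the uniform Beta(1, 1) prior, given n tests at rate p."""
    yes = round(n * p)
    no = n - yes
    return abs(posterior_mean(8, 2, yes, no) - posterior_mean(1, 1, yes, no))

for n in (10, 100, 1000):
    print(n, round(prior_gap(n), 4))
```

With n=10 the two means differ by a couple of percentage points; by n=1000 the difference is negligible, which is the sense in which the choice of prior stops mattering.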
If you do these things before running an experiment, you are likely to clarify your thinking, communicate better, and improve the data collection process and analysis. Sample size analysis can also help you choose a sample size, but most of the time that's determined by practical considerations, anyway. I mean, how many holes do you think they'll let you put in that parking lot? The mean of a Likert scale? Here's another installment in Data Q&A: Answering the real questions with Python. Previous installments are available from the Data Q&A landing page. Likert scale analysis¶ Here's a question from the Reddit statistics forum. I have collected data regarding how individuals feel about a particular program. They reported their feelings on a scale of 1-5, with 1 being Strongly Disagree, 2 being Disagree, 3 being Neutral, 4 being Agree, and 5 being Strongly Agree. I am looking to analyze the data for averages responses, but I see that a basic mean will not do the trick. I am looking for very simple statistical analysis on the data. Could someone help out regarding what I would do? It sounds like OP has heard the advice that you should not compute the mean of values on a Likert scale. The Likert scale is ordinal, which means that the values are ordered, but it is not an interval scale, because the distances between successive points are not necessarily equal. For example, if we imagine that "Neutral" maps to 0 on a number line and "Agree" maps to 1, it's not clear where we should place "Strongly agree". And an appropriate mapping might not be symmetric — for example, maybe the people who choose "Strongly agree" are content, but the people who choose "Strongly disagree" are angry. In an arithmetic mean, they would cancel each other out — but that might erase an important distinction. Nevertheless, I think an absolute prohibition on computing means is too strong.
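Before the examples, here is a toy illustration (mine, not from the post) of both the danger and the defense: the mean cannot distinguish a neutral group from a perfectly polarized one, but a simple spread measure can.

```python
# Two sets of responses on a 1-5 Likert scale with identical means
moderate = [3, 3, 3, 3, 3, 3]    # everyone chose "Neutral"
polarized = [1, 1, 1, 5, 5, 5]   # evenly split between the extremes

def mean(xs):
    return sum(xs) / len(xs)

def mean_abs_dev(xs):
    """Average distance from the mean: zero only if everyone agrees."""
    m = mean(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

print(mean(moderate), mean(polarized))                  # both 3.0
print(mean_abs_dev(moderate), mean_abs_dev(polarized))  # 0.0 vs 2.0
```

The mean alone erases exactly the distinction a reader would most want to know about, which is why it pays to look at the whole distribution first.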
I’ll show some examples where I think it’s a reasonable thing to do — but I’ll also suggest alternatives that might be I’ll download a utilities module with some of my frequently-used functions, and then import the usual libraries. In [1]: from os.path import basename, exists def download(url): filename = basename(url) if not exists(filename): from urllib.request import urlretrieve local, _ = urlretrieve(url, filename) print("Downloaded " + str(local)) return filename import numpy as np import matplotlib.pyplot as plt import pandas as pd import seaborn as sns from utils import decorate In [2]: # install the empiricaldist library, if necessary import empiricaldist except ImportError: !pip install empiricaldist Political views¶ As an example, I’ll use data from the General Social Survey (GSS), which I have resampled to correct for stratified sampling. In [3]: In [4]: gss = pd.read_hdf('gss_qna_extract.hdf', 'gss') The variable we’ll start with is polviews, which contains responses to this question: We hear a lot of talk these days about liberals and conservatives. I’m going to show you a seven-point scale on which the political views that people might hold are arranged from extremely liberal–point 1–to extremely conservative–point 7. Where would you place yourself on this scale? This is not a Likert scale, specifically, but it is ordinal. Here is the distribution of responses: In [5]: from utils import values 1.0 2095 2.0 7309 3.0 7799 4.0 24157 5.0 9816 6.0 9612 7.0 2145 NaN 9457 Name: count, dtype: int64 Here’s the mapping from numerical values to the options that were shown to respondents. In [6]: polviews_dict = { 1: "Extremely liberal", 2: "Liberal", 3: "Slightly liberal", 4: "Moderate", 5: "Slightly conservative", 6: "Conservative", 7: "Extremely conservative", As always, it’s a good idea to visualize the distribution before we compute any summary statistics. I’ll use Pmf from empiricaldist to compute the PMF of the values. 
In [7]: from empiricaldist import Pmf pmf_polviews = Pmf.from_seq(gss['polviews']) Here's what it looks like. In [8]: labels = list(polviews_dict.values()) plt.barh(labels, pmf_polviews) The modal value is "Moderate" and the distribution is roughly symmetric, so I think it's reasonable to compute a mean. For example, suppose we want to know how self-reported political alignment has changed over time. We can compute the mean in each year like this: In [9]: mean_series = gss.groupby('year')['polviews'].mean() And plot it like this. In [10]: from utils import plot_series_lowess plot_series_lowess(mean_series, 'C2') decorate(title='Mean political alignment') I used LOWESS to plot a local regression line, which makes long-term trends easier to see. It looks like the center of mass trended toward conservative from the 1970s into the 1990s and trended toward liberal since then. When you compute any summary statistic, you lose information. To see what we might be missing, we can use a normalized cross-tabulation to compute the distribution of responses in each year. In [11]: xtab = pd.crosstab(gss['year'], gss['polviews'], normalize='index')
│polviews│ 1.0 │ 2.0 │ 3.0 │ 4.0 │ 5.0 │ 6.0 │ 7.0 │
│ year │ │ │ │ │ │ │ │
│ 1974 │0.021908│0.142049│0.149117│0.380212│0.157597│0.127915│0.021201│
│ 1975 │0.040057│0.131617│0.148069│0.386266│0.145923│0.115880│0.032189│
│ 1976 │0.021877│0.139732│0.123500│0.398024│0.147495│0.145378│0.023994│
│ 1977 │0.025085│0.122712│0.145085│0.402712│0.164746│0.111186│0.028475│
│ 1978 │0.014463│0.096419│0.175620│0.384986│0.182507│0.128788│0.017218│
And we can use a heat map to visualize the results. In [12]: sns.heatmap(xtab.T, cmap='cividis_r') Based on the heat map, it seems like the general shape of the distribution has not changed much — so the mean is probably a good way to make comparisons over time. However, it is hard to interpret the mean in absolute terms. For example, in the most recent data, the mean is about 4.1.
In [13]: gss.query('year == 2022')['polviews'].mean() Since 4.0 maps to "Moderate", we can say that the center of mass is slightly on the conservative side of moderate, but it's hard to say what a difference of 0.1 means on this scale. As an alternative, we could add up the percentage who identify as conservative or liberal, with or without an adverb. In [14]: con = xtab[[5, 6, 7]].sum(axis=1) * 100 lib = xtab[[1, 2, 3]].sum(axis=1) * 100 And plot those percentages over time. In [15]: plot_series_lowess(con, 'C3', label='Conservative') plot_series_lowess(lib, 'C0', label='Liberal') decorate(title='Percent identifying as conservative or liberal') Or we could plot the difference in percentage points. In [16]: diff = con - lib plot_series_lowess(diff, 'C4') decorate(ylabel='Percentage points', title='Difference %conservative - %liberal') This figure shows the same trends we saw by plotting the mean, but the y-axis is more interpretable — for example, we could report that, at the peak of the Reagan era, conservatives outnumbered liberals by 10-15 percentage points. Standard deviation¶ Suppose we are interested in polarization, so we want to see if the spread of the distribution has changed over time. Would it be OK to compute the standard deviation of the responses? As with the mean, my answer is yes and no. First, let's see what it looks like. In [17]: std_series = gss.groupby('year')['polviews'].std() In [18]: plot_series_lowess(std_series, 'C3') decorate(ylabel='Standard deviation') The standard deviation is easy to compute, and it makes it easy to see the long-term trend. If we interpret the spread of the distribution as a measure of polarization, it looks like it has increased in the last 30 years. But it is not easy to interpret this result in context. If it increased from about 1.35 to 1.5, is that a lot? It's hard to say.
As an alternative, let's compute the mean absolute deviation (MAD), which we can think of like this: if we choose two people at random, how much will they differ on this scale, on average? A quick way to estimate MAD is to draw two samples from the responses and compute the mean pairwise distance. In [19]: def sample_mad(series, size=1000): data = series.dropna() if len(data) == 0: return np.nan sample1 = np.random.choice(data, size=size, replace=True) sample2 = np.random.choice(data, size=size, replace=True) mad = np.abs(sample1 - sample2).mean() return mad In [20]: The result is about 1.5 points, which is bigger than the distance from moderate to slightly conservative, and smaller than the distance from moderate to conservative. Rather than sampling, we can compute MAD deterministically by forming the joint distribution of response pairs and computing the expected value of the distances. For this computation, it is convenient to use NumPy functions for outer product and outer difference. In [21]: def outer_mad(series): pmf = Pmf.from_seq(series) if len(pmf) == 0: return np.nan ps = np.outer(pmf, pmf) qs = np.abs(np.subtract.outer(pmf.index, pmf.index)) return np.sum(ps * qs) Again, the result is about 1.5 points. In [22]: Now we can see how this value has changed over time. In [23]: mad_series = gss.groupby('year')['polviews'].apply(outer_mad) Here's the result, along with the standard deviation. In [24]: plt.figure(figsize=(6, 6)) plt.subplot(2, 1, 1) plot_series_lowess(std_series, 'C3') decorate(ylabel='Standard deviation') plt.subplot(2, 1, 2) plot_series_lowess(mad_series, 'C4') decorate(ylabel='Mean absolute difference') The two figures tell the same story — polarization is increasing. But the MAD is easier to interpret. In the 1970s, if you chose two people at random, they would differ by less than 1.5 points on average. Now the difference would be almost 1.7 points.
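The deterministic calculation does not depend on the Pmf class; it only needs the value counts and an outer difference. Here is a plain-NumPy restatement (my own sketch of the same expected-distance computation):

```python
import numpy as np

def mad_from_counts(values, counts):
    """Expected |X1 - X2| for two independent draws from the discrete
    distribution defined by values and counts."""
    values = np.asarray(values, dtype=float)
    ps = np.asarray(counts, dtype=float)
    ps = ps / ps.sum()
    joint = np.outer(ps, ps)                           # P(X1 = vi, X2 = vj)
    dists = np.abs(np.subtract.outer(values, values))  # |vi - vj|
    return float((joint * dists).sum())
```

For example, a 50/50 split between two adjacent scale points has an expected pairwise distance of 0.5, and a distribution concentrated on one point has a distance of 0.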
Considering that the difference between a moderate and a conservative is 2 points, it seems like we should still be able to get along. I think MAD is more interpretable than standard deviation, but it is based on the same assumption that the points on the scale are equally spaced. In most cases, that's not an assumption we can easily check, but in this example, maybe we can. For another project, I selected 15 questions in the GSS where conservatives and liberals are most likely to disagree, and used them to estimate the number of conservative responses from each respondent. The following figure shows the average number of conservative responses to the 15 questions for each point on the self-reported scale. In [25]: from utils import xticks conservatism = gss.groupby('polviews')['conservatism'].mean() xticks(polviews_dict, rotation=30) decorate(xlabel='Political alignment', ylabel='Conservative responses') The result is close to a straight line, which suggests that the assumption of equal spacing is not bad in this case. When is the mean bad?¶ In the examples so far, computing the mean and standard deviation of a scale variable is not necessarily the best choice, but it could be a reasonable choice. Now we'll see an example where it is probably a bad choice. The variable homosex contains responses to this question: What about sexual relations between two adults of the same sex–do you think it is always wrong, almost always wrong, wrong only sometimes, or not wrong at all? If the wording of the question seems loaded, remember that many of the core questions in the GSS were written in the 1970s. Here is the encoding of the responses. In [26]: homosex_dict = { 1: "Always wrong", 2: "Almost always wrong", 3: "Sometimes wrong", 4: "Not wrong at all", 5: "Other", } And here are the value counts. In [27]: 1.0 24856 2.0 1857 3.0 2909 4.0 12956 5.0 94 NaN 29718 Name: count, dtype: int64 Before we do anything else, let's look at the distribution of responses.
In [28]: pmf_homosex = Pmf.from_seq(gss['homosex']) In [29]: labels = list(homosex_dict.values()) plt.barh(labels, pmf_homosex) There are several reasons it's a bad idea to summarize this distribution by computing the mean. First, one of the responses is not ordered. If we include "Other" in the mean, the result is In [30]: gss['homosex'].mean() # total nonsense If we exclude "Other", the remaining responses are ordered, but arguably not evenly spaced on a spectrum of opinion. In [31]: gss['homosex'].replace(5, np.nan).mean() # still nonsense If we compute a mean and report that the average response is somewhere between "Sometimes wrong" and "Almost always wrong", that is not an effective summary of the distribution. The distribution of results is strongly bimodal — most people are either accepting of homosexuality or not. And that suggests a better way to summarize the distribution: we can simply report the fraction of respondents who choose one extreme or the other. I'll start by creating a binary variable that is 1 for respondents who chose "Not wrong at all", 0 for the other responses, and NaN for people who were not asked the question, did not respond, or chose "Other". In [32]: homosex_recode = { 1: 0, 2: 0, 3: 0, 4: 1, 5: np.nan, } gss['homosex_recode'] = gss['homosex'].replace(homosex_recode) In [33]: 0.0 29622 1.0 12956 NaN 29812 Name: count, dtype: int64 The mean of this variable is the fraction of respondents who chose "Not wrong at all". In [34]: Now we can see how this fraction has changed over time. In [35]: percent_series = gss.groupby('year')['homosex_recode'].mean() * 100 In [36]: plot_series_lowess(percent_series, 'C4') decorate(title='Percent responding "Not wrong at all"') The percentage of people who accept homosexuality was almost unchanged in the 1970s and 1980s, and began to increase quickly around 1990. For a discussion of this trend, and similar trends related to racism and sexism, you might be interested in Chapter 11 of Probably Overthinking It.
Computing the mean of an ordinal variable can be a quick way to make comparisons between groups or show trends over time. The computation implicitly assumes that the points on the scale are equally spaced, which is not true in general, but in many cases it is close enough. However, even when the mean (or standard deviation) is a reasonable choice, there is often an alternative that is easier to interpret in context. It’s always good to look at the distribution before choosing a summary statistic. If it’s bimodal, the mean is probably not the best choice. Data Q&A: Answering the real questions with Python Copyright 2024 Allen B. Downey License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
Homeschool math curriculum with His Vessel Textbooks - Algebra 1 ~ a TOS review Disclaimer: I received a complimentary copy of this product through the HOMESCHOOL REVIEW CREW in exchange for my honest review. I was not required to write a positive review nor was I compensated in any other way. All opinions I have expressed are my own or those of my family. I am disclosing this in accordance with FTC Regulations. Morning friends! We are another week closer to the end of the school year. In fact, today we begin our Spring Break, but first I wanted to share with you a new homeschool math curriculum that is Biblically based, called His Vessel Textbooks – Algebra 1. Wait…what? Yes, you read that right. A math book that has a Biblical worldview incorporated into it. The author, Mary Carroll, a homeschool mom and math lover, created this homeschool Algebra curriculum for Christians who wanted to teach their children math in the context of Scripture. His Vessel Textbooks – Algebra 1 is a thick hardcover book with over 500 pages cover to cover and retails for $99.99. With 11 units and 64 lessons, this homeschool math curriculum covers all of the standard Algebra 1 topics. From integers to graphing to the quadratic equation, your child will learn the necessary math while also getting a faith lesson as well. When this curriculum was offered for review I jumped at the chance to use it with Montana, my 9th grader. One, because Montana was having trouble understanding how to graph a line, and two, because I was curious how math could be tied to Scripture. Within each of the 64 lessons in this course you will find several features that tie in Scripture. These sections include: A God Moment: These sections are at the beginning of the lesson and contain a short devotional with scripture explaining the math concept from a Biblical view.
Objectives for This Lesson: These are written in an outline format as “I Can” statements so you know what you will be able to do at the finish of the section. Vocabulary: Helpful terms to know for the lesson. You Try: these are practice problems for you to complete. Only the answers are given to these problems, not the solutions. Practice Problems: This is your standard page of practice problems. They must be worked out on paper or in a notebook. Only the odd problems have the answers in the back of the book. Helpful Hints and Make it Clear: These are scattered throughout the lessons and offer clarity on concepts. Family Activity: Fun activities to get the whole family involved in learning Algebra are included in this section. Activities include eating a pie on Pi Day, completing puzzles, or making math shapes with Playdough and point back to Christ. Expression Project: These special projects are optional and found at the end of the unit. They vary in project and are meant to challenge the student. Completion of these is optional. Montana only used this math textbook as a supplement to our regular math program when she needed more understanding of a certain concept. Having access to this homeschool math book is a God send! Math doesn’t come easy for Montana; she has to work at it. For her, as I stated above, was the graphing of a line concept that was not sticking. The way the material is presented in this book has helped her understand how to graph the line and she is more confident in doing so. I love how each of the concepts are explained in an easy-to-understand way. If you need a little extra help you can also watch video lessons found on YouTube. I know that we will continue to reference this math textbook during the rest of the year and it will be a valuable resource for us as my two younger children come through high school math. I am excited to say that Mrs. Carroll is currently working on a Geometry book that will be released in the fall! 
I definitely have this book on my radar because anything that can make Geometry easier to understand is worth the price in gold to me. She also plans to release Algebra 2, Pre-Algebra, and elementary math series in the future. Hands-down, I think you should get your hands on a copy of this Algebra book and check it out for yourself. Don’t just take my word for it though, seven of my CrewMates also had the chance to review this homeschool algebra 1 book. Click on the banner below and read their thoughts as well. Format ~ hardcover textbook Price ~ $99.99 Ages ~ high school math Find His Vessel Textbooks - Algebra 1 on these Social Media sites: © 2008 - 2022 A Stable Beginning. All rights reserved. All photographs, text, artwork, and other content may not be reproduced or transmitted in any form without the written permission of the
Imputation Method IRMI Wolfgang Rannetbauer In addition to model-based imputation methods (see vignette("modelImp")) the VIM package also provides an iterative imputation method. This vignette showcases the function irmi(). IRMI is short for Iterative Robust Model-based Imputation. This method can be used to generate imputations for several variables in a dataset. Basically irmi() mimics the functionality of IVEWARE (Raghunathan et al., 2001), but there are several improvements with respect to the stability of the initialized values and the robustness of the imputed values. In contrast to other imputation methods, the IRMI algorithm does not require at least one fully observed variable. In each step of the iteration, one variable is used as a response variable and the remaining variables serve as the regressors. Thus the "whole" multivariate information will be used for imputation in the response variable. For more details, please see IRMI. The following example demonstrates the functionality of irmi() using a subset of sleep. The columns have been selected deliberately to include some interactions between the missing values. dataset <- sleep[, c("Dream", "NonD", "BodyWgt", "Span")] dataset$BodyWgt <- log(dataset$BodyWgt) dataset$Span <- log(dataset$Span) The plot indicates several missing values in Dream, NonD, and Span. Imputing multiple variables The call of the function is straightforward and the algorithm usually converges in a few iterations. We can see that irmi() imputed all missing values for all variables in our dataset. Diagnosing the results As we can see in the next plot, BodyWgt plays an important role in imputing the missing values in NonD. The original data structure of NonD and BodyWgt is preserved by the irmi() imputation method. The same is true for the data structure of Span and BodyWgt. Performance of method In order to validate the performance of irmi() and to highlight its ability to impute different data types, the iris dataset is used.
First, some values are randomly set to NA. df <- iris colnames(df) <- c("S.Length", "S.Width", "P.Length", "P.Width", "Species") # randomly produce some missing values in the data nbr_missing <- 50 y <- data.frame(row = sample(nrow(iris), size = nbr_missing, replace = TRUE), col = sample(ncol(iris), size = nbr_missing, replace = TRUE)) y <- y[!duplicated(y), ] df[as.matrix(y)] <- NA We can see that there are missing values in every variable, and that some observations have missing values in several variables. The plot indicates that all missing values have been imputed by the IRMI algorithm. The following table displays the first five rows of the imputation results, rounded, for all variables.
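The iterative scheme described above, in which each variable takes a turn as the response regressed on all the others, can be sketched in a few lines. The following is my own simplified Python analogue (ordinary least squares instead of robust regression, numeric columns only), not a port of irmi(), but it shows the shape of the algorithm:

```python
import numpy as np

def iterative_impute(X, n_iter=10):
    """Naive iterative model-based imputation for a numeric matrix.

    Missing cells (NaN) are initialized with column means; then each
    column with missing values is treated as the response in an OLS
    regression on all other columns, and its missing cells are replaced
    by the fitted values. Repeated for n_iter sweeps.
    """
    X = np.array(X, dtype=float)
    mask = np.isnan(X)                                 # remember the holes
    col_means = np.nanmean(X, axis=0)
    X[mask] = np.take(col_means, np.where(mask)[1])    # crude initialization

    n, d = X.shape
    for _ in range(n_iter):
        for j in range(d):
            miss = mask[:, j]
            if not miss.any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(n), others])  # intercept + regressors
            # fit only on the rows where column j was actually observed
            coef, *_ = np.linalg.lstsq(A[~miss], X[~miss, j], rcond=None)
            X[miss, j] = A[miss] @ coef                # refresh the imputations
    return X
```

As in irmi(), every variable's imputations draw on the whole multivariate information; the real algorithm adds robust estimators, support for mixed data types, and a convergence check.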
SI and CI (Simple and Compound Interest) #CompoundInterest #DeveshSir Devesh Sir presents "Compound Interest Tricks". Compound Interest – 10 | Compound Interest Tricks | Simple and Compound Interest | Common Eligibility Test. Compound interest tricks in Hindi and English; compound interest tricks and formulas; CI and SI tricks and questions for the Common Eligibility Test.
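Behind such tricks are two standard formulas: simple interest SI = P * r * t, and the compound amount A = P * (1 + r)^t, so CI = A - P. A minimal sketch (my own, not taken from the video series) comparing the two:

```python
def simple_interest(principal, rate, years):
    """SI = P * r * t, with rate as a decimal (0.10 for 10%)."""
    return principal * rate * years

def compound_interest(principal, rate, years):
    """CI = P * (1 + r)**t - P, compounded once per year."""
    return principal * (1 + rate) ** years - principal

# Classic exam-style check: at 10% for 2 years on 1000,
# CI exceeds SI by exactly P * r**2 = 10
p, r, t = 1000, 0.10, 2
si = simple_interest(p, r, t)    # 200.0
ci = compound_interest(p, r, t)  # about 210.0
```

For two years, the gap CI - SI = P * r^2 is itself a common shortcut in these exam problems.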
Unscramble EARTHED How Many Words are in EARTHED Unscramble? By unscrambling letters earthed, our Word Unscrambler aka Scrabble Word Finder easily found 121 playable words in virtually every word scramble game! Letter / Tile Values for EARTHED Below are the values for each of the letters/tiles in Scrabble. The letters in earthed combine for a total of 11 points (not including bonus squares) • E [1] • A [1] • R [1] • T [1] • H [4] • E [1] • D [2] What do the Letters earthed Unscrambled Mean? The longest words unscrambled from EARTHED are listed below, along with their definitions. • earth (n.) - The globe or planet which we inhabit; the world, in distinction from the sun, moon, or stars. Also, this world as the dwelling place of mortals, in distinction from the dwelling place of spirits. • heart (n.) - A hollow, muscular organ, which, by contracting rhythmically, keeps up the circulation of the blood.
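Using the standard English Scrabble tile values (A, E, I, O, U, L, N, S, T, R = 1; D, G = 2; B, C, M, P = 3; F, H, V, W, Y = 4; K = 5; J, X = 8; Q, Z = 10), a word's face value can be checked programmatically. A quick sketch of mine:

```python
# Standard English Scrabble tile values
SCRABBLE_VALUES = {
    **dict.fromkeys("AEIOULNSTR", 1),
    **dict.fromkeys("DG", 2),
    **dict.fromkeys("BCMP", 3),
    **dict.fromkeys("FHVWY", 4),
    "K": 5,
    **dict.fromkeys("JX", 8),
    **dict.fromkeys("QZ", 10),
}

def word_score(word):
    """Sum of tile values, ignoring bonus squares and blank tiles."""
    return sum(SCRABBLE_VALUES[ch] for ch in word.upper())

print(word_score("EARTHED"))  # E+A+R+T+H+E+D = 1+1+1+1+4+1+2 = 11
```

The same function scores any rack or played word, again without board bonuses.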
What our customers say... Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences: All my skepticisms about this program were gone the first time I took a test and did not have to struggle through it. Thank you. Cathy Dixx, OH My son used to hate algebra. Since I have purchased this software, it has surprisingly turned him to an avid math lover. All credit goes to Algebrator. Seth Lore, IA I can't tell you how happy I am to finally find a program that actually teaches me something!!! Jessica Short, NJ As proud as my Mom was every time I saw her cheering me after I ran in a touchdown from 5 yards out, she was always just as worried about my grades. She said if I didnt get my grades up, nobody would ever give me a scholarship, no matter how many rushing yards I got. Even when my coach showed me your program, I didnt want no part of it. But, it started making sense. Now, I do algebra with as much confidence as play football and my senior year is gonna be my best yet! John Dixon, MI It is tremendous. Any difficult problem and I get the step-by-step. Not seen anything better than this. All that I can say is it seems I got a personal tutor for me. Warren Mills, CA Search phrases used on 2010-10-27: Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. 
Can you find yours among • Square root sample sheets • rules of exponents hands on activities • how to do college algebra • free algebra worksheets with a lot of questions • free worksheet simplifying radicals • online algebraic calculator • Precalculus Math problem solver • long division of equations software • solving y-intercept • order of operations to find answers • algebra help/factoring monomials from polynomials • substitution method calculator online • how do I convert a mixed fraction into a decimal point • how to convert a percentage into a decimal • glencoe algebra 2 math worksheets • online differential calculator • differentiate using money to help learn how to multiply decimals • 7th grade algebra practice • how to program the quadratic formula in calculator • lessons on compund words • MATH EXERCISEONLINE FOR KIDS • can we use rational expressions in every day • online graphing calculator mod • finding roots third order polynomial • free 6th grade worksheets • comparing and ordering fractions 7th grade free worksheets • pie squared math problem\ • least common multiple word problems • riemann sum calculator application • tell me sites to solve my mathmatical problems • changing fractions to higher terms free worksheets • solving third-order equation • calculate polynom program • algebra 1 mcdougal littell answers • 9th grade math online free • multiplying fractions on a ti-83 calculator • math answers algebra 2 • algebra 1 free online tutoring • prentice hall pre-algebra answer pages • algebraic problem solver with work • free learning how to divide polynomials • scale factors real life applications • number in front of the square root • 83 factored • define lineal metre • glenco mathamatics • radicands on scientific calculator • basic ratio formula • matlab solve equation for unknown • use the formula method standard form • different of two square • free practice sheets algerbra print outs • matlab y=ax2+bx+c • online inequality word problem solver 
• highest common factor of 60
• gr. 7 math and equations for tables samples
• grade nine math tests
• Solving Systems of Linear Equations Real Life Examples
• ti89 pdf
• ti 89 rom image download
• add and subtract equations worksheet
• printable algebra games
• what is the least common denominator of 21 and 14?
• algebra calculator reducing rational expression
• linear programming in graphing calculator
• Equation Writer de Creative Software Design
• symmetric polynomials\free for download\pdf
• hyperbolas equation on ti 83
• mcdougal Algebra 2 answers
• Simple tips to pass the ACT
• prealgerbra problems
• addition properties worksheets
• online calculaters for use
• free scale factor sheets
• Hornsby Intermediate Algebra ninth edition study guide
• math poems exponents
• adding, subtracting, multiplying, and dividing integers worksheets
• worksheet for multiplying and dividing integers
• "graphing calculator" +free +online +"table of values"
• transformation worksheets geometry elementary
• how to solve simultaneous linear equations TI-89
• "synthetic long division" + "accounting"
• online quadratic solver
• algebra age word problems
• factorer quadratic
• ti 83 graphing calculator online
• inequalities powerpoint
• decimal converter to square root fraction
• sol algebra games
• math problems seventh grade scale factors
• addition and subtraction formulas questions
• Graphing Slope Intercept Form Worksheets
• 2nd math print outs
• mathcad convert decimals to binary
• solving multiple equations in excel
• converting decimals into words
• palindrome java program ignore punctuation
• Scale Factor worksheets printables
• what is the Lowest commom multiple of 50 and 75
Derivative - (Differential Calculus) - Vocab, Definition, Explanations | Fiveable from class: Differential Calculus A derivative represents the rate at which a function changes at any given point, essentially capturing the slope of the tangent line to the curve of that function. This concept is fundamental in understanding how functions behave, especially when analyzing instantaneous rates of change, optimizing functions, and solving real-world problems involving motion and growth. congrats on reading the definition of Derivative. now let's actually learn it. 5 Must Know Facts For Your Next Test 1. The derivative of a function at a point can be interpreted as the limit of the average rate of change of the function over an interval as that interval approaches zero. 2. The notation for derivatives includes symbols like $$f'(x)$$, $$\frac{dy}{dx}$$, or $$D[f(x)]$$, depending on the context and specific conventions used. 3. The derivative can be used to find critical points where functions may have local maxima or minima, which is essential for optimization problems. 4. Understanding how to compute derivatives using rules like the product rule, quotient rule, and chain rule is vital for effectively handling complex functions. 5. Derivatives are also important in real-life applications, such as determining speed (rate of change of position) and acceleration (rate of change of speed) in motion. Review Questions • How does the concept of a derivative relate to the tangent line problem and what does it imply about the behavior of functions? □ The derivative is directly linked to the tangent line problem as it gives us the slope of the tangent line at any given point on a curve. This slope indicates how steeply the function rises or falls at that point, providing insight into the function's behavior. By understanding this relationship, we can analyze how functions change and predict their future behavior based on local properties. 
• What role do derivatives play in analyzing rates of change in real-world scenarios such as motion?
□ Derivatives are crucial for understanding rates of change in various contexts, especially in motion. For instance, when studying an object's position over time, the derivative provides its instantaneous velocity, which tells us how fast and in which direction it's moving at any moment. This application helps in solving practical problems related to speed and acceleration in physics and engineering.
• Discuss how Rolle's Theorem relates to derivatives and its significance in finding roots of functions.
□ Rolle's Theorem states that if a function is continuous on a closed interval and differentiable on the open interval between those endpoints, with equal function values at both ends, then there exists at least one point within that interval where the derivative equals zero. This theorem highlights the existence of stationary points, which can help identify potential maximum or minimum values and is instrumental in locating roots of functions. Its application ensures that if certain conditions are met, we can confidently assert that critical points exist for further analysis.

© 2024 Fiveable Inc. All rights reserved. AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
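Fact 1 above, the derivative as the limit of average rates of change over shrinking intervals, is easy to check numerically. A minimal Python sketch (the function x² and the point x₀ = 3 are arbitrary choices for illustration, not taken from the page):

```python
def difference_quotient(f, x, h):
    """Average rate of change of f over the interval [x, x + h]."""
    return (f(x + h) - f(x)) / h

f = lambda t: t ** 2   # example function; the exact derivative is f'(t) = 2t
x0 = 3.0               # sample point; the exact derivative there is 6

# As h shrinks, the average rate of change approaches f'(x0) = 6.
for h in (1.0, 0.1, 0.01, 0.001):
    print(h, difference_quotient(f, x0, h))
```

The printed quotients tend toward the exact slope 2·3 = 6 as h approaches zero, which is exactly the limit in the definition.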
What is the difference between solving a rational equation and simplifying a rational expression?

what is the difference between solving a rational equation and simplifying a rational expression?

Related topics:
free teach myself math software
difference between homework and worksheet
factoring algebraic equations
hbj math for first grade
squre footage calculater
algebra word problems worksheets grade 7
Practice Hall Mathematics Pre Algebra
9th grade math formulas
mathematical symmetry with triangle
integer rules addition subtraction multiplication division
complex mathematical equation
solving expressions
functions for combinations and permutations

Gaifoy
Posted: Thursday 06th of Aug 08:30

Hey friends, I have just completed one week of my college, and am getting a bit worried about my "what is the difference between solving a rational equation and simplifying a rational expression?" homework. I just don't seem to grasp the topics. How can one expect me to do my homework then? Please help me.

From: HkG SAR

Jahm Xjardx
Posted: Thursday 06th of Aug 15:53

The best way to get this done is using Algebrator. This software provides a very fast and easy to learn method of doing math problems. You will definitely start liking algebra once you use and see how effortless it is. I remember how I used to have a hard time with my Algebra 2 class and now with the help of Algebrator, learning is so much fun. I am sure you will get help with "what is the difference between solving a rational equation and simplifying a rational expression?" problems here.

From: Odense, Denmark, EU

Jot
Posted: Saturday 08th of Aug 07:43

Hey Friend, Algebrator helped me with my assignments last month. I got the Algebrator from https://softmath.com/. Go ahead, check that and let us know your opinion. I have even recommended Algebrator to a lot of my friends at college.
From: Ubik

TempnaStoln22
Posted: Monday 10th of Aug 09:21

I would love to try this tool if it can really help me learn math quickly. I like maths, but after the job, I don't have any energy left in my body to solve equations.

Momepi
Posted: Monday 10th of Aug 18:05

I recommend trying out Algebrator. It not only assists you with your math problems, but also displays all the required steps in detail so that you can improve your understanding of the subject.

From: Ireland

Jrobhic
Posted: Wednesday 12th of Aug 09:12

There you go: https://softmath.com/links-to-algebra.html.
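For what it is worth, the distinction the thread asks about can be illustrated in plain Python without any algebra package (the expression (x² − 1)/(x − 1) and the equation (x² − 1)/(x − 1) = 4 are made-up examples): simplifying rewrites an expression into an equivalent form, while solving finds the values of the variable that make an equation true.

```python
# Simplifying: (x**2 - 1)/(x - 1) reduces to x + 1 for every x != 1.
# The two forms are the same expression written differently.
expr = lambda x: (x**2 - 1) / (x - 1)
simplified = lambda x: x + 1

for x in (0, 2, 5, -3):                  # spot-check that the forms agree
    assert abs(expr(x) - simplified(x)) < 1e-12

# Solving: find the x that makes (x**2 - 1)/(x - 1) == 4 true,
# here numerically by bisection on g(x) = expr(x) - 4 over an
# interval where g changes sign.
def bisect(g, lo, hi, tol=1e-9):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:          # root lies in the left half
            hi = mid
        else:                            # root lies in the right half
            lo = mid
    return (lo + hi) / 2

root = bisect(lambda x: expr(x) - 4, 1.5, 10.0)
print(root)   # close to 3, since x + 1 = 4 gives x = 3
```

The simplification has no "answer"; it holds for all admissible x. The equation has a specific solution set, here the single value x = 3.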
CCL.NET 1997.05.11-003
From: Alan.Shusterman # - at - # directory.Reed.EDU (Alan Shusterman)
Date: 11 May 97 16:56:57 PDT
Subject: Summary: Symmetry Bug

The following note summarizes replies and answers I received to my question about an apparent symmetry bug in Spartan 4.1. Here is my original post:

I run into the following bug every few weeks: I build a molecule with a certain symmetry (Cs in this case) and optimize its structure with symmetry ON. Then I try to reoptimize its structure with a larger basis set while still using symmetry, but Spartan stops with the following error message:

Molecule (C1) and archive (CS) have differenty symmetry. Calculation

Can anyone tell me at what point the molecule adopted C1 symmetry? (The input file and the molecule on the screen both still have CS symmetry.) Does anyone have any advice on how to proceed? (I really want to do the larger basis set with symmetry, so I need to convince Spartan that the molecule does have CS symmetry.)

Note: transferring the coordinates in the archive (CS) to the input file doesn't help (Spartan already did this automatically when I set up the calculation for the larger basis set). Spartan still thinks the molecule is C1 and the archive is CS.

Most of the replies that I received fell into one of two categories:

1. use some other software (sound advice in this case; see below)
2. double-check the coordinates in the input file to make sure they really have the expected symmetry. This turns out not to be important in this case (see below).

My own comment on suggestion #2 is that the molecule I began with really did have CS symmetry, so this symmetry was maintained during the initial (small basis set) optimization, and the final geometry stored in the archive file had this symmetry.
When I set up a reoptimization with a larger basis set, Spartan automatically copies the symmetric "archive" geometry over the "input" geometry, i.e., it makes a new input file, but the input geometry, though changed, still has CS symmetry. The real problem is that Spartan was failing to recognize an obviously symmetric structure. There is no need to double-check the coordinates (I think).

The "solution" to my problem appears to be this: SPARTAN 4.1 DOES NOT USE SYMMETRY IN CALCULATIONS INVOLVING DIFFUSE BASIS FUNCTIONS. I don't know if this is documented anywhere, but it is the source of my trouble - my initial optimization (Cs symmetry) did not use diffuse functions, but the reoptimization did call for diffuse functions.

Here is a test that I ran to verify the problem: I built bent HOF. This molecule MUST have CS symmetry because it is a nonlinear triatomic. I saved several identical input files and used each one to start a geometry optimization using a different basis set. The optimizations using standard basis sets (STO-3G, 3-21G(*), 6-31G*, 6-31G**, 6-311G**) all recognized and used the molecule's CS symmetry. The optimizations using diffuse basis sets (3-21+G(*), 6-31+G**, 6-31++G**) all disabled symmetry first and carried out the optimization for a C1 molecule. The same behavior was observed whether the calculations were done in memory or DIRECT.

Until this is fixed, Spartan users should realize that optimizations for symmetric molecules using diffuse basis sets cannot be carried out with the apparent symmetry. Also, they cannot be "restarted" using wavefunctions or Hessians derived from nondiffuse (i.e., high symmetry) archives. If a user knows in advance that calculations with diffuse functions will be needed, then s/he should either disable symmetry at the outset, or not use the "restart" option when using the diffuse basis sets.

My thanks to all who contributed.
Alan Shusterman
Department of Chemistry
Reed College
Portland, OR
Parkfield Curriculum - Year 6 Mathematics OLD

• Read, write, order and compare numbers up to 10,000,000 and determine the value of each digit.
• Round any whole number to a required degree of accuracy.
• Use negative numbers in context, and calculate intervals across zero.
• Solve number and practical problems that involve all of the above.

Addition, Subtraction, Multiplication and Division

• Solve addition and subtraction multi-step problems in contexts, deciding which operations and methods to use and why.
• Multiply multi-digit numbers up to 4 digits by a 2-digit number using the formal written method of long multiplication.
• Divide numbers up to 4 digits by a 2-digit whole number using the formal written method of long division, and interpret remainders as whole number remainders, fractions, or by rounding as appropriate for the context.
• Divide numbers up to 4 digits by a 2-digit number using the formal written method of short division, interpreting remainders according to the context.
• Perform mental calculations, including with mixed operations and large numbers.
• Identify common factors, common multiples and prime numbers.
• Use their knowledge of the order of operations to carry out calculations involving the four operations.
• Solve problems involving addition, subtraction, multiplication and division.
• Use estimation to check answers to calculations and determine, in the context of a problem, an appropriate degree of accuracy.
• Use common factors to simplify fractions; use common multiples to express fractions in the same denomination.
• Compare and order fractions, including fractions > 1.
• Generate and describe linear number sequences (with fractions).
• Add and subtract fractions with different denominators and mixed numbers, using the concept of equivalent fractions.
• Multiply simple pairs of proper fractions, writing the answer in its simplest form [for example, 1/4 x 1/2 = 1/8].
• Divide proper fractions by whole numbers [for example, 1/3 ÷ 2 = 1/6].
• Associate a fraction with division and calculate decimal fraction equivalents [for example, 0.375] for a simple fraction [for example, 3/8].
• Recall and use equivalences between simple fractions, decimals and percentages, including in different contexts.

Geometry: Position and Direction

• Describe positions on the full coordinate grid (all four quadrants).
• Draw and translate simple shapes on the coordinate plane, and reflect them in the axes.
• Identify the value of each digit in numbers given to 3 decimal places and multiply numbers by 10, 100 and 1,000 giving answers up to 3 decimal places.
• Multiply one-digit numbers with up to 2 decimal places by whole numbers.
• Use written division methods in cases where the answer has up to 2 decimal places.
• Solve problems which require answers to be rounded to specified degrees of accuracy.
• Solve problems involving the calculation of percentages [for example, of measures, such as 15% of 360] and the use of percentages for comparison.
• Recall and use equivalences between simple fractions, decimals and percentages, including in different contexts.
• Use simple formulae.
• Generate and describe linear number sequences.
• Express missing number problems algebraically.
• Find pairs of numbers that satisfy an equation with two unknowns.
• Enumerate possibilities of combinations of two variables.

Measurement: Converting Units

• Solve problems involving the calculation and conversion of units of measure, using decimal notation up to three decimal places where appropriate.
• Use, read, write and convert between standard units, converting measurements of length, mass, volume and time from a smaller unit of measure to a larger unit, and vice versa, using decimal notation to up to 3 decimal places.
• Convert between miles and kilometres.
Measurement: Perimeter, Area and Volume

• Recognise that shapes with the same areas can have different perimeters and vice versa.
• Recognise when it is possible to use formulae for area and volume of shapes.
• Calculate the area of parallelograms and triangles.
• Calculate, estimate and compare volume of cubes and cuboids using standard units, including cm³ and m³, and extending to other units (mm³, km³).
• Solve problems involving the relative sizes of two quantities where missing values can be found by using integer multiplication and division facts.
• Solve problems involving similar shapes where the scale factor is known or can be found.
• Solve problems involving unequal sharing and grouping using knowledge of fractions and multiples.

Geometry: Properties of Shape

• Draw 2-D shapes using given dimensions and angles.
• Compare and classify geometric shapes based on their properties and sizes and find unknown angles in any triangles, quadrilaterals and regular polygons.
• Recognise angles where they meet at a point, are on a straight line, or are vertically opposite, and find missing angles.
• Illustrate and name parts of circles, including radius, diameter and circumference, and know that the diameter is twice the radius.
• Interpret and construct pie charts and line graphs and use these to solve problems.
• Calculate the mean as an average.
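A few of the calculation objectives above (the area formulae, the miles-to-kilometres conversion, and the mean) can be written out directly. The sample values below are arbitrary illustrations:

```python
def area_parallelogram(base, height):
    return base * height

def area_triangle(base, height):
    return base * height / 2       # half of the enclosing parallelogram

def miles_to_km(miles):
    return miles * 1.609344        # exact definition of the mile in km

def mean(values):
    return sum(values) / len(values)

print(area_parallelogram(6, 4))    # 24
print(area_triangle(6, 4))         # 12.0
print(round(miles_to_km(5), 2))    # 8.05
print(mean([3, 7, 8, 2]))          # 5.0
```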
From a (p, 2)-Theorem to a Tight (p, q)-Theorem

A family F of sets is said to satisfy the (p, q)-property if among any p sets of F some q have a non-empty intersection. The celebrated (p, q)-theorem of Alon and Kleitman asserts that any family of compact convex sets in R^d that satisfies the (p, q)-property for some q ≥ d + 1 can be pierced by a fixed number (independent of the size of the family) f_d(p, q) of points. The minimum such piercing number is denoted by HD_d(p, q). Already in 1957, Hadwiger and Debrunner showed that whenever q > ((d − 1)/d)·p + 1 the piercing number is HD_d(p, q) = p − q + 1; no tight bounds on HD_d(p, q) were found ever since. While for an arbitrary family of compact convex sets in R^d, d ≥ 2, a (p, 2)-property does not imply a bounded piercing number, such bounds were proved for numerous specific classes. The best-studied among them is the class of axis-parallel boxes in R^d, and specifically, axis-parallel rectangles in the plane. Wegner (Israel J Math 3:187–198, 1965) and (independently) Dol'nikov (Sibirsk Mat Ž 13(6):1272–1283, 1972) used a (p, 2)-theorem for axis-parallel rectangles to show that HD_rect(p, q) = p − q + 1 holds for all q ≥ √(2p). These are the only values of q for which HD_rect(p, q) is known exactly. In this paper we present a general method which allows using a (p, 2)-theorem as a bootstrapping to obtain a tight (p, q)-theorem, for classes with Helly number 2, even without assuming that the sets in the class are convex or compact. To demonstrate the strength of this method, we show that HD_{d-box}(p, q) = p − q + 1 holds for all q > c′·log^(d−1) p, and in particular, HD_rect(p, q) = p − q + 1 holds for all q ≥ 7 log₂ p (compared to q ≥ √(2p), obtained by Wegner and Dol'nikov more than 40 years ago). In addition, for several classes, we present improved (p, 2)-theorems, some of which can be used as a bootstrapping to obtain tight (p, q)-theorems.
In particular, we show that any class G of compact convex sets in R^d with Helly number 2 admits a (p, 2)-theorem with piercing number O(p^(2d−1)), and thus satisfies HD_G(p, q) = p − q + 1, for a universal constant c.

• (p,q)-Theorem
• Axis-parallel rectangles
• Convexity
• Hadwiger–Debrunner numbers
• Helly-type theorems
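For intuition about piercing numbers in the simplest setting, d = 1, where the sets are intervals and the Helly number is 2, a minimum piercing set can be computed by the classic greedy sweep over right endpoints. This is only a toy illustration of the piercing concept, not the paper's argument:

```python
def pierce_intervals(intervals):
    """Minimum set of points meeting every closed interval [a, b].

    Greedy sweep: sort by right endpoint; whenever an interval is not
    yet pierced by the last chosen point, its right endpoint is an
    optimal next piercing point.
    """
    points = []
    for a, b in sorted(intervals, key=lambda iv: iv[1]):
        if not points or points[-1] < a:   # last chosen point misses [a, b]
            points.append(b)
    return points

print(pierce_intervals([(1, 3), (2, 5), (4, 6), (7, 8)]))   # [3, 6, 8]
# Helly in dimension 1: pairwise-intersecting intervals share a point,
# so a family with the (2, 2)-property is pierced by a single point.
print(pierce_intervals([(0, 5), (1, 4), (2, 6)]))           # [4]
```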
Framing Square

rafter extending beyond the outer edge of the plate. A measure line (fig. 2-4, view B) is an imaginary reference line laid out down the middle of the face of a rafter. If a portion of a roof is represented by a right triangle, the measure line corresponds to the hypotenuse; the rise to the altitude; and the run to the base. A plumb line (fig. 2-4, view C) is any line that is vertical (plumb) when the rafter is in its proper position. A level line (fig. 2-4, view C) is any line that is horizontal (level) when the rafter is in its proper position.

FRAMING SQUARE

LEARNING OBJECTIVE: Upon completing this section, you should be able to describe and solve roof framing problems using the framing square.

The framing square is one of the most frequently used Builder tools. The problems it can solve are so many and varied that books have been written on the square alone. Only a few of the more common uses of the square can be presented here. For a more detailed discussion of the various uses of the framing square in solving construction problems, you are encouraged to obtain and study one of the many excellent books on the square.

DESCRIPTION

The framing square (fig. 2-5, view A) consists of a wide, long member called the blade and a narrow, short member called the tongue. The blade and tongue form a right angle. The face of the square is the side one sees when the square is held with the blade in the left hand, the tongue in the right hand, and the heel pointed away from the body. The manufacturer's name is usually stamped on the face. The blade is 24 inches long and 2 inches wide. The tongue varies from 14 to 18 inches long and is 1 1/2 inches wide, measured from the outer corner, where the blade and the tongue meet. This corner is called the heel of the square. The outer and inner edges of the tongue and the blade, on both face and back, are graduated in inches. Note how inches are subdivided in the scale on the back of the square.
In the scales on the face, the inch is subdivided in the regular units of carpenter's measure (1/8 or 1/16 inch). On the back of the square, the outer edge of the blade and tongue is graduated in inches and twelfths of inches. The inner edge of the tongue is graduated in inches and tenths of inches. The inner edge of the blade is graduated in inches and thirty-seconds of inches on most squares. Common uses of the twelfths scale on the back of the framing square will be described later. The tenths scale is not normally used in roof framing.

Figure 2-5. Framing square: A. Nomenclature; B. Problem solving.

SOLVING BASIC PROBLEMS WITH THE FRAMING SQUARE

The framing square is used most frequently to find the length of the hypotenuse (longest side) of a right triangle when the lengths of the other two sides are known. This is the basic problem involved in determining the length of a roof rafter, a brace, or any other member that forms the hypotenuse of an actual or imaginary right triangle. Figure 2-5, view B, shows you how the framing square is used to determine the length of the hypotenuse of a right triangle with the other sides each 12 inches long. Place a true straightedge on a board and set the square on the board so as to bring the 12-inch mark on
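The calculation the square performs graphically, finding the hypotenuse from the two known sides, is just the Pythagorean theorem. A quick numerical check of the 12-inch-by-12-inch layout described for figure 2-5, view B:

```python
import math

def rafter_length(rise, run):
    """Hypotenuse of the right triangle with the given rise and run."""
    return math.hypot(rise, run)

# The 12" by 12" layout in the text: the diagonal is 12 * sqrt(2).
print(round(rafter_length(12, 12), 2))   # 16.97 inches

# A 3-4-5 right triangle as a sanity check.
print(rafter_length(3, 4))               # 5.0
```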
Derivative at a point

Peter, could you please explain why you prefer the title "differential quotient"? I haven't studied mathematics in English for some time, but I still feel that "derivative" is the more common name. Formally, the derivative should be the limit of the differential quotient as h approaches zero, but in my mind they are not the same concept. Johan A. Förberg 22:08, 21 January 2011 (UTC)

I see a subtle difference:
□ The differential quotient of f at x is the limit of the difference quotients at x (only one particular point considered),
□ while the derivative of f is the function with values equal to the differential quotient (the full domain of the function is considered).
(The redirect is not final, "derivative" should have its own page, as should "derivation".) Peter Schmitt 00:54, 22 January 2011 (UTC)

OK, I see your point. But as the article reads now, it only confuses the reader further as to the difference between the derivative and the d.q. Johan A. Förberg 23:34, 22 January 2011 (UTC)

I never met the term "differential quotient". Wikipedia has no such article, and moreover, its search gives no results. Google gives first 5 results that contain in fact only "difference quotient", but result no. 6 (dictionary.com) mentions "differential quotient" as item 6 in "derivative". --Boris Tsirelson 06:34, 23 January 2011 (UTC)

Yes, I was also surprised that it popped up so rarely, but it does so in different places, including research papers. Could it be a Germanism? The term is very usual in German. I'll try to find out more in the literature -- old and new. This may help to deal with it properly. From a didactical perspective, it is a rather useful distinction -- e.g., you need a derivative (function) before you can talk about a second derivative. --Peter Schmitt 10:58, 23 January 2011 (UTC)

I believe the confusion goes back to the Newton-Leibniz controversy. Newton talked about fluents and fluxions and Leibniz about differentials.
The continent followed Leibniz (one of the first things I learned in Delft, a continental city, was the word "differential quotient") while England stayed with Newton. In the 19th century the British changed slowly to the Leibniz notation, but they did not adopt his complete terminology. The typical British book by Whittaker-Watson (1902) doesn't use the term "differential quotient", while the modern German DTV-Atlas zur Mathematik gives it. As far as I know there is no clear distinction between derivative and DQ. For what it is worth I (not being a mathematician) would simply write

$\frac{df(x)}{dx} := \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = \lim_{\Delta x \to 0} \frac{\Delta f}{\Delta x}.$

--Paul Wormer 12:58, 23 January 2011 (UTC)

I am not and never have been a professional mathematician, although I use mathematics. It may, indeed, be linguistic. From the 1960s, I still have several American calculus textbooks, and none appear to use the term. While I think I understand Peter's distinction between instantaneous and range, my sense is that it is an advanced point. Renaming articles, I believe, needs some discussion first. Incidentally, have any of you read the New York Times series on popularizing "advanced mathematics" -- advanced to the layman? It has a nice introduction to the idea and history of derivatives, which it compares and contrasts, historically, to integrals. Howard C. Berkowitz 18:36, 23 January 2011 (UTC)

Peter, would you be willing to revert your page move? Johan A. Förberg 21:21, 23 January 2011 (UTC)

German WP: Hierzu dient die Ableitung (auch Differentialquotient genannt). [The derivative (also called differential quotient) serves to this end.] --Paul Wormer 08:45, 24 January 2011 (UTC)

The actual English equivalent of dq is "differential coefficient". Peter Jackson 12:10, 24 January 2011 (UTC)
As I said previously, I'll try to research this before I form an opinion. --Peter Schmitt 12:22, 24 January 2011 (UTC) Without researching, it seems to be usual among mathematicians, to say either "the derivative of f at x equals k", or "the derivative of f equals g". In the former case it is a single number, in the latter case a function, but the term "derivative" can serve both. --Boris Tsirelson 16:17, 24 January 2011 (UTC) I happened to be reading Stephen Hawking's translation of Einstein's 1916 paper and saw the term "differential quotient" for ${\displaystyle \chi ={\frac {d\psi }{ds}}}$. --Paul Wormer 13:21, 26 January 2011 (UTC) Could he have been just translating literally? Peter Jackson 10:38, 2 February 2011 (UTC) "Derivative" as an article title is preferable to "Differential quotient" In my experience the term "differential quotient" hardly ever comes up, while "derivative" and "differential" occur commonly. I'd say the title of this page is now presenting a secondary, minor usage, at least as far as American and English usage. To be re-directed from "derivative" to "differential quotient" appears to suggest that "differential quotient" is the more common and preferred term, which I'd dispute. John R. Brews 19:12, 22 February 2011 (UTC) An historical account can be found in Boyer, p. 275 where the term "differential quotient" is described as an invention of Leibniz in a formulation based upon differentials, but subsequently overturned by Cauchy, who introduced the derivative in terms of limits and the term "differential" in terms of the derivative. This source attributes to Cauchy a formal precision previously lacking. In my view this settles the matter that the article should be returned to the title "Derivative", and a Redirect used to send "Differential quotient" to this page. If more is wanted, this google book search turns up 119,000 results for "differential quotient", of which many are unrelated to derivative. 
On the other hand, this google book search turns up 1.8 million hits for "derivative". John R. Brews 22:45, 22 February 2011 (UTC) I now moved the page ("Derivative at a point"). Essentially, it only deals with that case, and the derivatve function deserves its own page, I think. By the way, because of this difference the Google search is not as overwhelmingly convincing as it may seem at first glance. -Peter Schmitt 14:16, 23 February 2011 (UTC) The term "derivative at a point" can be seen to be separate from the "derivative function" assembled from all the "derivative-at-a-point" values. However, the distinction is seldom used. This google book search turns up only 2,330 hits for "derivative at a point", showing much less usage than either "derivative" or "differential quotient". Just what is the purpose of separate articles on these two topics? Does it reflect common usage of two quite different ideas with widely varying application? Or, does it only emphasize a distinction that is occasionally useful, but not often found in everyday science or engineering? John R. Brews 20:18, 23 February 2011 (UTC) In everyday science or engineering probably not, indeed. But still, mathematically, differentiability at a given point has no logical relation to differentiability at any other point. It is possible to define the derivative (function) globally, without first treating each point separately; however, this way probably is seldom used in teaching for mathematicians, and never - for others. That is, the derivative-function is usually treated as just the collection of all derivatives-at-points. --Boris Tsirelson 21:31, 23 February 2011 (UTC) Hi Boris: So, to try to connect the dots here for all but a few special readers, being directed from "derivative" to either an article titled "derivative at a point" or one called "derivative function" would be neither here nor there? 
If both these articles were written, it would make no never-mind to even the average scientist which article they read: they'd get what was for them the same info in both articles apart from some delicate questions they probably never would think to ask? John R. Brews 23:17, 23 February 2011 (UTC) Boris already answered this. Whether you call it derivative or differential quotient, differentiability is a local phenomenon. It may exist at a few isolated points only. And only in special cases it can be used to define a derivative function. (Even in this case the locally defined limits are used.) That in science functions are usually assumed to be (almost) everywhere differentiable does not change the logical dependence. This should be of interest to all scientists. (And the number of Google hits has no bearing here.) When all articles will be written, derivative will have to be a disambiguation page (not only for mathematics! There are non-mathematical meanings, too). We should have derivative of a function (and derived set) and derivation (or similar) that properly reference each other. (Maybe some other articles, too.) --Peter Schmitt 23:59, 23 February 2011 (UTC) OK: The vision is a very deep CZ with a lot of very detailed explanations and what some might call nuanced or specialized or even hair-splitting articles. That is not how it is at the moment: the vast majority of articles I've looked at stick to the qualitative and make no attempt at what might be called a "treatise" level of sophistication. What is the vision of CZ as you see it? And is it possible that a staged evolution to greater complexity, beginning with rather broad treatments and using them as background introductions later on, would be a suitable route to take, given that there are a million holes in CZ coverage at the moment, never mind this kind of detail? John R. Brews 00:21, 24 February 2011 (UTC) Yes, currently CZ has more gaps and poor articles than good ones. 
And yes, I hope that some time in the future there will be a lot of "deep and detailed" explanations. But what we are talking about here is neither deep nor hair-splitting: Every textbook has to begin with differentiation at a point, perhaps in form of a tangent. First it has to define f′(x[0]) and to follow with f′ later. --Peter Schmitt 02:13, 24 February 2011 (UTC) All this is another manifestation of a general fact: no text about some mathematics (and not only math, I guess) can satisfy mathematicians, physicists, engineers, economists etc. simultaneously. This is why in the real world (outside wikis) we observe a lot of textbooks (on the same topic) intended for different audience. I believe that it is a fundamental error of both Wikipedia and Citizendium, to try to satisfy everyone by a single text. It leads all the time to conflicts, inevitably. In contrast, "we allow multiple articles, written from different approaches, on individual topics" [1]. --Boris Tsirelson 14:12, 24 February 2011 (UTC) You are right, Boris, some/many topics can profit from multiple articles. But the concept of a single article is older, it comes from printed encyclopedias, I would guess. I hope that CZ will last and develop and flourish, and in the end will have such articles. But now. for most topics, we do not have even one good article, let alone several. (This needs authors, and KI has even less ...) A good example for parallel articles would be vector and tensor analysis. The derivative, however, is not: I don't think that physicists introduce it differently -- the difference comes later, when the average physicist assumes functions to be differentiable (or analytic). In contrast to textbooks, however, CZ should put parallel expositions in relation to each other, and help to bridge the gap between different cultures instead of supporting it. For instance, most physicists will not need to know non-differentiable functions, but they should know that they exist. 
--Peter Schmitt 23:27, 24 February 2011 (UTC) John, even if there were indeed many readers who do not need/appreciate/want "sophisticated" information there are also other reader (mathematics students, for instance) for whom this is important (and fundamental) information. --Peter Schmitt 00:25, 25 February 2011 (UTC)
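For reference, the two notions being distinguished in this discussion can be written out; these are the standard calculus definitions, added here only for clarity:

```latex
% Derivative of f at a single point x_0 (a number, when the limit exists):
f'(x_0) \;=\; \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h}

% The derivative function assembles these pointwise values:
f' \colon x \longmapsto f'(x), \qquad
  \text{defined on the set of all } x \text{ at which the limit exists.}
```

The locality Boris Tsirelson describes shows up in the standard example f(x) = x² for rational x and f(x) = 0 for irrational x: at x₀ = 0 the difference quotient satisfies |f(h)/h| ≤ |h|, so f′(0) = 0 exists, while f is discontinuous (hence not differentiable) at every other point. The "derivative function" of this f therefore has domain {0}.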
Bisection method C++ Program | Algorithm & Solved Example

The Bisection Method is a powerful algorithm that simplifies the process of locating roots within an interval. It is based on the principle of repeatedly dividing an interval and narrowing down the search space until the root is found with a desired level of accuracy.

Bisection Method C++

The bisection method is an algorithm, or an iterative method, for finding the roots of a non-linear equation. Convergence in the bisection method is linear, which is slow compared to other iterative methods. However, it is the simplest method and it never fails.

Also Read: Regula Falsi Method C++

Bisection Method C++ Program

//Bisection method C++
#include<iostream>
#include<math.h>    //used for fabs() function.
#include<iomanip>   //used for setw() and setprecision();
                    //they just manipulate the output.
using namespace std;

//function definition:
//calculates the value of x*sin(x)-1 for a given x.
float f(float x)
{
    return x*sin(x)-1;
}

//bisects the interval [a, b] and counts the number of iterations
//by incrementing the value of *itr.
void bisect(float *x, float a, float b, int *itr)
{
    *x = (a+b)/2;
    ++(*itr);
    cout<<"Iteration no. "<<*itr<<" X = "<<setw(3)<<setprecision(5)<<*x<<endl;
}

int main()
{
    int itr=0, maxitr;
    float x, a, b, aerr, x1;
    cout<<"Enter the values of a and b, allowed error, maximum iterations"<<endl;
    cin>>a>>b>>aerr>>maxitr;
    bisect(&x, a, b, &itr);
    do
    {
        if(f(a)*f(x) < 0)   //root lies in [a, x]
            b = x;
        else                //root lies in [x, b]
            a = x;
        bisect(&x1, a, b, &itr);
        if(fabs(x1-x) < aerr)   //fabs() gives the absolute value of (x1-x).
        {
            cout<<"After "<<itr<<" iterations, root = "<<setw(6)<<setprecision(4)<<x1<<endl;
            return 0;
        }
        x = x1;
    } while(itr < maxitr);
    cout<<"Solution does not converge, iterations not sufficient"<<endl;
    return 0;
}

In this code, we implement the Bisection Method as a C++ program, highlighting its simplicity and effectiveness in solving equations.

Bisection Method Rule

This method applies the Intermediate Value Property repeatedly. Suppose a function f(x) is continuous in a closed interval [a, b] and f(a) and f(b) have opposite signs.
Then the root lies between a and b, and the first approximation to the root is x1 = (a+b)/2.

Related: Newton Raphson Method

Now the root lies between a and x1, or between x1 and b, according as f(a) and f(x1) have opposite signs or f(b) and f(x1) have opposite signs, respectively. Suppose the root lies between a and x1; then we again bisect the interval to find the next approximation, x2 = (a+x1)/2, and continue the process until the root is found to the desired accuracy.

Related: Gauss Jordon Method C++

In the figure above, f(x1) is positive and f(x0) is negative, so the root lies between x1 and x0. We then bisect the interval and find x2; f(x2) is also positive, so the root lies between x0 and x2, and we find x3, and so on.

Also Read: Gauss Elimination Method C++

Bisection Method Example

Find the root of the equation x^3 - 4x - 9 = 0, using the bisection method, correct to three decimal places.

Sol. Let f(x) = x^3 - 4x - 9. Then f(2) = 8 - 8 - 9 = -9 < 0 and f(3) = 27 - 12 - 9 = 6 > 0, so a root lies between 2 and 3. The first approximation is x1 = (2+3)/2 = 2.5, with f(2.5) = -3.375 < 0, so the root lies between 2.5 and 3. Next, x2 = (2.5+3)/2 = 2.75, with f(2.75) = 0.7969 > 0, so the root lies between 2.5 and 2.75. Continuing in this way until successive approximations agree to the required accuracy gives the root as approximately 2.7065.

Since every interval is half of its previous interval, i.e. in each step the length of the interval is reduced by a factor of 1/2, the error bound is also halved at every iteration.

MCQ: The convergence in the bisection method is linear.